Search Results (3,824)

Search Parameters:
Keywords = hybrid deep learning

21 pages, 4935 KB  
Article
Deep Unsupervised Learning for Indoor Fire Detection Using Wi-Fi Signals
by Sara Mostofi, Fatih Yesevi Okur, Ahmet Can Altunişik and Ertugrul Taciroğlu
Fire 2026, 9(5), 189; https://doi.org/10.3390/fire9050189 - 1 May 2026
Abstract
This study proposes a sensor-free approach for indoor fire detection that leverages existing Wi-Fi infrastructure as a passive sensing modality. By extracting Channel State Information (CSI) from prevalent 802.11n Wi-Fi signals and applying an unsupervised deep anomaly detection model, the approach conceptualizes the wireless environment as a sensorless detection field capable of identifying combustion-induced perturbations without requiring any physical sensors. CSI data were collected in both normal and flame-induced states under three combustion conditions (gasoline, wood, plastic), each introducing unique signal perturbations. These data, which exhibit diverse signal perturbations, were used as input to four unsupervised deep anomaly detection architectures: a variational autoencoder (VAE), a 1D convolutional autoencoder (CNN-AE), a long short-term memory autoencoder (LSTM-AE), and a hybrid CNN-LSTM autoencoder. Each architecture was trained exclusively on baseline data to learn compact latent representations of normal signal patterns. Among the evaluated architectures, CNN-AE achieved perfect detection across all scenarios, showing high responsiveness to signal disruptions. LSTM-AE tracks prolonged combustion but struggles with fast-onset anomalies. VAE maintains low error during baseline but misses sharp deviations. These findings validate that Wi-Fi CSI encodes latent combustion features. The method requires no additional sensors and operates on existing signals, enabling scalable smart building integration via lightweight software updates. Full article
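The decision stage of such train-on-normal autoencoders reduces to thresholding reconstruction error against baseline statistics. A minimal sketch (the autoencoder itself is abstracted away, and the mean + k·sigma threshold is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def detect_anomalies(errors, baseline_errors, k=3.0):
    """Flag windows whose reconstruction error exceeds a threshold derived
    from baseline (normal-state) errors: mean + k * std. Because the model
    is trained only on normal CSI data, combustion-induced perturbations
    reconstruct poorly and their error rises above the threshold."""
    mu, sigma = baseline_errors.mean(), baseline_errors.std()
    threshold = mu + k * sigma
    return errors > threshold, threshold
```

Swapping in per-scale or per-subcarrier thresholds follows the same pattern.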
33 pages, 1208 KB  
Article
Hybrid Model-Based Framework for Real-Time Adaptive Traffic Signal Control
by Bratislav Lukić, Goran Petrović, Žarko Ćojbašić, Dragan Marinković and Srđan Dimić
Future Transp. 2026, 6(3), 100; https://doi.org/10.3390/futuretransp6030100 - 1 May 2026
Abstract
Real-time traffic signal control represents a key challenge in modern intelligent transportation systems, particularly under highly variable traffic flows and the presence of priority vehicles. This study proposes a hybrid framework for adaptive signal plan control at a signalized intersection. The framework integrates deep learning-based traffic prediction, surrogate-based performance evaluation, and reinforcement learning-based adaptive control. Short-term traffic flow is predicted using recurrent neural networks, providing anticipatory information for traffic control decisions. Based on predicted flows and generated candidate signal plans, a machine learning surrogate model enables fast estimation of key performance indicators, including average vehicle delay and queue length. Adaptive control is implemented using the Proximal Policy Optimization algorithm within the SUMO environment via TraCI, which enables real-time fine-tuning of signal phases. A dedicated priority and stability module ensures effective emergency vehicle preemption and adaptive public transport priority while preserving intersection stability. Simulation results show that the proposed framework reduces average vehicle delay by up to 35% compared with FT and by up to 15% compared with standalone RL, while also improving traffic flow efficiency and priority vehicle performance. Full article
(This article belongs to the Special Issue Intelligent Vision Technologies in Traffic Surveillance Systems)
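The surrogate-evaluation step of such a framework amounts to scoring candidate signal plans with a fast learned model and keeping the best one. A toy sketch, with a hypothetical quadratic surrogate standing in for the trained machine learning model:

```python
def select_signal_plan(candidate_plans, surrogate):
    """Pick the candidate signal plan that minimizes the surrogate-estimated
    average vehicle delay. `surrogate` stands in for a trained ML model."""
    return min(candidate_plans, key=surrogate)

# Illustrative surrogate: delay grows when green time mismatches demand
# (purely hypothetical numbers, not from the paper).
def toy_surrogate(plan):
    demand = {"north": 30, "east": 10}
    return sum((plan[d] - demand[d]) ** 2 for d in demand)

best = select_signal_plan(
    [{"north": 20, "east": 20}, {"north": 28, "east": 12}],
    toy_surrogate,
)
```

In the paper's framework the chosen plan is then fine-tuned online by the PPO agent inside SUMO.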
31 pages, 6203 KB  
Article
Hybrid Wavelet–CNN Framework for Intelligent Valve Stiction Detection in Control Loops
by Shaveen Maharaj, Nelendran Pillay, Kevin Emanuel Moorgas and Navin Singh
Actuators 2026, 15(5), 249; https://doi.org/10.3390/act15050249 - 30 Apr 2026
Abstract
Valve stiction remains a persistent nonlinear phenomenon in industrial control loops, often inducing limit-cycle oscillations that degrade control performance, compromise stability, and reduce process efficiency. Reliable detection of stiction is therefore essential for condition-based maintenance and improved operational performance. This study proposes a Hybrid Wavelet–Convolutional Neural Network (HW-CNN) framework for the detection of valve stiction in closed-loop systems. The approach employs the continuous wavelet transform (CWT) to generate time–frequency scalograms that preserve localized energy distributions associated with stick–slip behavior, including transient release events and sustained oscillatory patterns. These representations are subsequently processed using a fine-tuned deep residual neural network to enable automated feature extraction and classification. Unlike conventional signal-based or generic time–frequency learning approaches, the proposed framework is designed to retain control system-specific dynamics within the feature representation, thereby improving the separability of stiction-induced signatures under varying operating conditions. The methodology is evaluated using both simulated control loop data and real industrial datasets obtained from the International Stiction Database (ISDB), ensuring evaluation under controlled and practical conditions. To enhance reliability, performance metrics are reported as averages over repeated experimental runs. The results demonstrate that the proposed HW-CNN framework achieves an accuracy of 96.1% and an F1-score of 96.0% on simulated datasets, and 90.4% accuracy with an F1-score of 90.0% on industrial data. Additional analysis indicates that the model maintains consistent detection capability despite increased variability in real-world conditions. 
Furthermore, interpretability is supported through Grad-CAM analysis, which shows that the network focuses on physically meaningful regions within the scalograms corresponding to known stiction dynamics. The findings confirm that the integration of wavelet-based feature encoding with deep residual learning provides a robust and interpretable framework for valve stiction detection. Full article
(This article belongs to the Section Control Systems)
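A scalogram of the kind the HW-CNN consumes can be sketched with a real Morlet wavelet and plain convolution; this is an illustrative stand-in for a full CWT implementation, not the authors' code:

```python
import numpy as np

def morlet(t, scale, w0=5.0):
    # Real Morlet wavelet sampled at times t for a given scale.
    x = t / scale
    return np.exp(-0.5 * x**2) * np.cos(w0 * x) / np.sqrt(scale)

def scalogram(signal, scales, dt=1.0):
    """Time-frequency scalogram: |CWT coefficients| at each scale, obtained
    by convolving the signal with scaled wavelets. Localized energy bursts
    (e.g. stick-slip release events) show up as bright vertical streaks."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    rows = [np.abs(np.convolve(signal, morlet(t, s), mode="same"))
            for s in scales]
    return np.array(rows)  # shape: (len(scales), len(signal))
```

The resulting 2-D array is what gets fed to the residual CNN as an image.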
27 pages, 23053 KB  
Article
CNN–Attention–LSTM with Bayesian Optimization for Multi-Level Sump Well Anomaly Early Warning
by Yining Lin and Changchun Cai
Mathematics 2026, 14(9), 1528; https://doi.org/10.3390/math14091528 - 30 Apr 2026
Abstract
Reliable anomaly early warning for hydropower station sump wells remains challenging due to the strong nonlinearity of water level dynamics and the limited adaptability of conventional fixed-threshold alarms. Here, we present a hybrid deep learning framework—termed CNN–Attention–LSTM–BO—that fuses multi-scale local feature extraction, adaptive temporal weighting, and sequential dependency modeling within a unified architecture, with all critical hyperparameters tuned via Bayesian optimization. A four-dimensional input representation is first constructed from the raw water level signal and its first- and second-order differences together with the drainage pump operating state, capturing both trend and transient information. One-dimensional convolutions at multiple kernel scales encode short-range fluctuation patterns, a Bahdanau-style temporal attention layer selectively amplifies informative time steps, and a stacked LSTM propagates long-horizon risk dependencies. At the decision stage, a dual dynamic thresholding scheme couples an improved 3σ criterion with kernel density estimation (KDE) to partition the smoothed risk score into three graded alert levels (normal/warning/critical), replacing the binary alarm paradigm. Experiments on the SWaT benchmark yield an average area under the ROC curve (AUC) of 0.9246, an average Accuracy of 0.8812, and a best single-well false alarm rate (FAR) of 3.21% (Well-4), with an average FAR of 8.97% across three wells, outperforming both traditional limit-value alarms and ablated variants lacking CNN or attention modules. Full article
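The graded-alert idea can be sketched as follows. Both thresholds below are simple sigma bounds on historical risk scores, a simplified stand-in for the paper's improved 3σ criterion coupled with KDE:

```python
import numpy as np

def grade_alerts(risk, history, k_warn=2.0, k_crit=3.0):
    """Partition a smoothed risk score into normal/warning/critical levels,
    replacing a binary alarm. Thresholds here are sigma bounds on historical
    scores -- an illustrative simplification of the 3-sigma + KDE pair."""
    mu, sigma = history.mean(), history.std()
    warn, crit = mu + k_warn * sigma, mu + k_crit * sigma
    levels = np.full(len(risk), "normal", dtype=object)
    levels[risk > warn] = "warning"
    levels[risk > crit] = "critical"
    return levels
```

Recomputing the thresholds on a sliding history window makes them dynamic, as in the paper.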
34 pages, 2208 KB  
Review
Next-Generation Artificial Intelligence Strategies for Mechanistic Cancer Target Discovery and Drug Development: A State-of-the-Art Review
by Muhammad Sohail Khan, Muhammad Saeed, Muhammad Arham, Imran Zafar, Majid Hussian, Adil Jamal, Muhammad Usman, Fayez Saeed Bahwerth, Gabsik Yang and Ki Sung Kang
Int. J. Mol. Sci. 2026, 27(9), 4028; https://doi.org/10.3390/ijms27094028 - 30 Apr 2026
Abstract
Artificial intelligence (AI) is increasingly used in cancer research, enabling integrative analysis of complex biomedical data to identify actionable therapeutic vulnerabilities. This review specifically examines how AI advances mechanistic cancer target discovery and translational drug development, focusing on: (1) the processing of large-scale genomics, transcriptomics, proteomics, metabolomics, single-cell profiling, spatial, and clinical datasets using machine learning (ML) and deep learning (DL) algorithms; (2) the identification of candidate biomarkers, driver genes, dysregulated pathways, tumor dependencies, and molecular targets that traditional methods often miss; (3) the integration of multi-omics data, network biology, causal inference, and systems-level modeling to refine mechanistic understanding of cancer progression and separate functional driver events from passengers; and (4) applications in drug development, including virtual screening, molecular modeling, structure-informed target validation, drug repurposing, synthetic lethality prediction, and de novo drug design, which collectively may enhance early-stage drug discovery efficiency. The review underscores that AI serves as both a predictive tool and a platform for linking molecular mechanisms to hypothesis generation, target prioritization, and rational treatment design. Challenges such as data heterogeneity, algorithmic bias, interpretability, reproducibility, regulatory requirements, and patient privacy must be addressed for robust translation and clinical use. Future directions may focus on hybrid approaches that integrate causal modeling, explainable AI, multimodal data, and experimental validation to yield mechanistically grounded, clinically actionable insights. AI-driven approaches ultimately aim to accelerate mechanism-based cancer target discovery and enable more precise, biologically informed anticancer therapies. Full article
30 pages, 1639 KB  
Article
Robust and Calibrated ECG Heartbeat Classification via Hybrid Convolutional, Temporal and Attention-Based Learning
by Jyoti Rani, Shilpa Gupta and Vikas Mittal
Appl. Sci. 2026, 16(9), 4393; https://doi.org/10.3390/app16094393 - 30 Apr 2026
Abstract
Electrocardiogram (ECG) heartbeat classification is an essential component of automated arrhythmia detection and intelligent cardiac monitoring systems. Traditionally, ECG analysis has depended on manual interpretation by clinicians and conventional machine learning approaches based on handcrafted features, which are labor-intensive, noise-sensitive, and inadequate for capturing complex nonlinear morphological and temporal characteristics of ECG signals. Furthermore, real-world ECG datasets are highly imbalanced, noisy, and exhibit overlapping waveform patterns across heartbeat classes, leading to biased learning, poor minority class detection, and unreliable predictions. To address these challenges, this paper presents a calibration-aware, reliability-oriented evaluation framework for ECG heartbeat classification, incorporating hybrid deep learning architectures that combine convolutional feature extraction, bidirectional GRU-based temporal modeling, and attention mechanisms. The framework assesses probabilistic reliability using calibration metrics, such as the Brier Score and Expected Calibration Error (ECE), rather than explicitly modeling predictive uncertainty methods. Experimental results on the ECG Heartbeat dataset show that CNN achieves the highest testing accuracy (98.44%), largely due to strong performance on the majority class in an imbalanced setting. Among hybrid approaches, a representative hybrid CNN + BiGRU + Attention model attains a competitive accuracy of 97.80%, along with a higher macro F1-score (0.9052), improved training stability, and good calibration behavior (Brier Score = 0.0417, ECE = 0.1023). As the experiments are conducted on preprocessed, fixed-length segments, the results reflect performance under controlled conditions rather than real-world clinical deployment conditions and should therefore be interpreted as a benchmark-level evaluation. 
Furthermore, no single model consistently outperforms others across all evaluation criteria, as different metrics capture distinct aspects of performance. Full article
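The two calibration metrics the framework reports are straightforward to compute. A minimal sketch of the binary Brier Score and a standard equal-width-bin ECE (binning details may differ from the authors' setup):

```python
import numpy as np

def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and 0/1 labels
    (the binary form of the Brier Score; lower is better)."""
    return np.mean((probs - labels) ** 2)

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: weighted average gap between per-bin accuracy and per-bin mean
    confidence, over equal-width probability bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(probs)
    for i in range(n_bins):
        mask = (probs > bins[i]) & (probs <= bins[i + 1])
        if i == 0:
            mask |= probs == bins[0]
        if mask.any():
            acc, conf = labels[mask].mean(), probs[mask].mean()
            ece += mask.sum() / n * abs(acc - conf)
    return ece
```

A perfectly calibrated classifier has ECE 0; the reported ECE of 0.1023 means confidences overshoot or undershoot accuracy by about ten points on average.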
34 pages, 3638 KB  
Article
Multi-Scale Hybrid Attention Temporal Network for Motionless Activity Using Smartphone Inertial Sensors
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
Technologies 2026, 14(5), 272; https://doi.org/10.3390/technologies14050272 - 30 Apr 2026
Abstract
Wearable sensor-based human activity recognition (HAR) has gained growing significance in healthcare monitoring and assisted living systems. Although considerable advances have been made in classifying dynamic movements, stationary activities—such as sleeping, driving, and watching TV—remain difficult to distinguish owing to their weak sensor signatures and limited discriminative cues. This paper presents the multi-scale hybrid attention temporal network (MHAT-Net), a deep learning framework whose key architectural novelty lies in the parallel (non-sequential) dual-pathway temporal modeling: a BiGRU branch and a transformer encoder branch operate simultaneously on the same spatially encoded representation, combined via a learnable attention-based fusion module. This design targets the underexplored problem of distinguishing stationary activities from weak inertial sensor signatures. The architecture is built upon three integrated components: (1) a multi-branch CNN with kernel sizes three, five, and seven combined with channel attention for adaptive spatial feature extraction across multiple temporal scales; (2) parallel bidirectional gated recurrent unit (BiGRU) and transformer encoder pathways for jointly capturing short-range sequential patterns and long-range temporal correlations; and (3) an attention-driven fusion module that adaptively weights the outputs of both temporal branches. The model was assessed on a publicly available benchmark comprising three motionless activity categories collected from 25 participants via smartphone sensors. In 5-fold cross-validation, MHAT-Net attained 97.42% (±4.69%) accuracy with accelerometer data and 92.31% (±0.31%) with gyroscope data, substantially exceeding the accuracies of five baseline architectures: CNN, LSTM, BiLSTM, GRU, and BiGRU. Ablation experiments identified multi-scale spatial feature extraction as the most influential module (2.21–2.47% contribution), followed by the hybrid temporal modeling components. 
Cross-modality analysis confirmed that accelerometer signals yielded richer discriminative content for stationary activities, while MHAT-Net sustained consistent performance across both sensor types. The proposed integration of multi-scale spatial encoding, hybrid temporal modeling, and multi-level attention gives MHAT-Net the ability to reliably detect subtle activity-specific patterns, establishing a new benchmark in wearable sensor-based recognition for comprehensive daily behavior monitoring. Full article
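The attention-based fusion of the two temporal branches follows a common pattern: score each branch output against a learnable weight vector, softmax the scores, and take the weighted sum. A sketch with illustrative (untrained) weights, not MHAT-Net's actual parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fusion(branch_a, branch_b, w):
    """Adaptive fusion of two branch outputs (e.g. BiGRU and transformer
    encodings of the same window): softmax-normalized attention scores
    weight each branch's contribution."""
    scores = np.array([branch_a @ w, branch_b @ w])
    alpha = softmax(scores)
    return alpha[0] * branch_a + alpha[1] * branch_b, alpha
```

During training, gradients through `alpha` let the model learn which branch to trust for a given input.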
26 pages, 36181 KB  
Article
A Hybrid U-Net and Attention-Based BiLSTM Framework for Wildfire Prediction Using Multi-Source Remote Sensing and Meteorological Sensor Data
by Zhiyu Chen, Weiwei Song, Xiaoqing Zuo, Siyuan Li, Huyue Chen and Bowen Zuo
Electronics 2026, 15(9), 1893; https://doi.org/10.3390/electronics15091893 - 30 Apr 2026
Abstract
Forest and grassland fires have become increasingly severe under climate change, posing significant threats to ecosystems and human safety. Accurate wildfire prediction using remote sensing data remains challenging due to complex spatiotemporal dynamics and heterogeneous data sources. To address this issue, this study proposes a hybrid deep learning framework integrating U-Net and an attention-enhanced bidirectional long short-term memory network (AUBLSTM) for spatiotemporal wildfire prediction using multi-source remote sensing and meteorological data. The U-Net is employed for spatial feature extraction, while AUBLSTM captures temporal dependencies and improves fire spread modeling with attention mechanisms. An encoder–decoder architecture is adopted to enhance multi-scale feature representation, and meteorological constraints are incorporated to improve physical consistency. Experimental results demonstrate that the proposed model outperforms baseline methods, including convolutional long short-term memory (ConvLSTM) and fully connected networks, achieving superior performance in terms of MSE, RRMSE, PSNR, SSIM, IoU, and F1-Score. The framework is efficient, scalable, and suitable for deployment in electronic monitoring and early warning systems, providing a practical solution for integrating multi-source data into wildfire surveillance applications. Full article
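Two of the reported segmentation metrics, IoU and F1-Score, can be computed from binary fire masks as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def iou_f1(pred, truth):
    """Intersection-over-Union and F1 (Dice) for binary fire masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union if union else 1.0
    total = pred.sum() + truth.sum()
    f1 = 2 * inter / total if total else 1.0
    return iou, f1
```

For binary masks F1 coincides with the Dice coefficient, which is why the two often appear interchangeably in segmentation papers.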
22 pages, 5221 KB  
Article
Hybrid Deep Neural Network with Natural Language Processing Techniques to Analyze Customer Satisfaction with Delivery Platform Manager Responses
by Salihah Alotaibi
Appl. Sci. 2026, 16(9), 4359; https://doi.org/10.3390/app16094359 - 29 Apr 2026
Abstract
Delivery services have drawn much attention and become of topmost significance in urban areas by presenting online food delivery selections for a diversity of dishes from a wide range of restaurants, decreasing both travel and waiting times. Customer data analysis acts as a cornerstone in corporate strategy, allowing enterprises to gather and interpret user feedback and helping them to make informed decisions that drive future business development. However, major knowledge gaps remain due to the scarcity of literature review studies on these delivery services, hindering a complete understanding of customer satisfaction in this sector. Furthermore, there has been little systematic research on managerial response tactics to online consumer complaints and negative reviews. Researchers have contributed by applying artificial intelligence, including deep learning and machine learning models, to analyze customer sentiment and understand customer brand perceptions. This study presents a Hybrid Deep Neural Network Model for Customer Satisfaction Analysis (HDNNM-CSA), with the aim of developing an efficient model which is capable of accurately classifying customer satisfaction levels in delivery apps based on textual responses provided by customer experience managers. To achieve this, the model initially pre-processes text data using text cleaning, emoji removal, normalization, tokenization, stop word removal, and stemming to clean and standardize the unstructured text data for further analysis. Following this, term frequency–inverse document frequency-based word embedding is utilized to transform the pre-processed text into meaningful feature representations. Lastly, an ensemble architecture involving bidirectional long short-term memory, temporal convolutional, and graph convolutional networks is deployed to classify customer satisfaction levels with managers’ responses. A series of experimental analyses are performed, and the results are examined for numerous features. 
A comparative analysis demonstrates the enhanced performance of the HDNNM-CSA technique with respect to existing approaches. Full article
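The TF-IDF embedding step of the pipeline can be sketched in a few lines; this uses a smoothed-IDF variant, which may differ in detail from the authors' implementation:

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF vectors for tokenized documents: term frequency within each
    document, scaled by smoothed inverse document frequency across the
    corpus. Rare terms get boosted, ubiquitous terms get damped."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log((1 + n) / (1 + c)) + 1 for t, c in df.items()}
    return [{t: (c / len(doc)) * idf[t] for t, c in Counter(doc).items()}
            for doc in docs]
```

The resulting sparse vectors are what the BiLSTM/TCN/GCN ensemble consumes after the text-cleaning stages.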
28 pages, 2426 KB  
Article
Mathematical Modeling of Transit Network Spatiotemporal Dynamics: A Data-Efficient Graph-BiLSTM Architecture for Segment-Level Travel Time Forecasting
by Aigerim Mansurova, Aigul Adamova, Daniyar Rakhymzhanov and Didar Yedilkhan
Mathematics 2026, 14(9), 1503; https://doi.org/10.3390/math14091503 - 29 Apr 2026
Abstract
This study addresses the challenge of accurate short-term bus travel time prediction in data-constrained urban environments. While many existing models rely on extensive external data, we propose a data-efficient spatiotemporal framework that integrates sequential deep learning with multi-relational graph modeling using only standard operational transit data. The approach is implemented through a multi-relational graph spatio-temporal bidirectional LSTM (MRG-ST-BiLSTM) model, which captures temporal dependencies while preserving the physical topology of the road network. A real-world case study was conducted using GTFS-based data from bus operations in the Greater Sydney Area, covering five routes and more than four million stop-level records. In the real road network test, numerical experiments demonstrated that the proposed model achieved improved prediction accuracy compared to baseline methods. Specifically, it achieved an MAE of 15.82 s and RMSE of 31.38 s, outperforming the conventional LSTM (MAE 26.70, RMSE 45.61), Hybrid-BiLSTM (MAE 17.24, RMSE 32.73) and GCN-LSTM (MAE 17.27, RMSE 32.61). The proposed framework provides a scalable and interpretable practical solution for transit agencies. Full article
(This article belongs to the Special Issue Application of Mathematical Modeling and Simulation to Transportation)
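The reported error metrics are the standard MAE and RMSE over predicted versus observed segment travel times:

```python
import numpy as np

def mae_rmse(pred, actual):
    """Mean absolute error and root mean squared error (both in seconds
    here). RMSE penalizes large misses more heavily, which is why it
    exceeds MAE whenever errors vary in size."""
    err = np.asarray(pred, dtype=float) - np.asarray(actual, dtype=float)
    return np.abs(err).mean(), np.sqrt((err ** 2).mean())
```

The gap between the model's MAE (15.82 s) and RMSE (31.38 s) suggests a tail of occasional large errors alongside many small ones.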
33 pages, 3593 KB  
Review
Fiber-Optic Gyroscopes in Modern Navigation Systems: A Comprehensive Review
by Nurzhigit Smailov, Yerlan Tashtay, Pawel Komada, Yerzhan Nussupov, Kanat Zhunussov, Askhat Batyrgaliyev, Daulet Naubetov, Aziskhan Amir, Beibarys Sekenov and Darkhan Yerezhep
Network 2026, 6(2), 28; https://doi.org/10.3390/network6020028 - 29 Apr 2026
Abstract
This paper provides a comprehensive overview of progress in fiber-optic gyroscope technology, covering 260 key studies from the last ten years. A critical comparative analysis of the fiber-optic gyroscope against alternative inertial sensors (Micro-Electro-Mechanical Systems, Hemispherical Resonator Gyroscope, Ring Laser Gyroscope) confirms its unique advantages for autonomous navigation. Fundamental accuracy limitations are considered in detail: temperature drifts, polarization noise, and Rayleigh backscattering. Modern hardware methods for suppressing these errors, including photonic crystal and hollow-core (Air-Core/Hollow-Core) fibers, are also considered. The central place in the review is occupied by the analysis of the technological paradigm shift from bulky discrete circuits to hybrid integrated photonics (Indium Phosphide, Silicon Nitride, Lithium Niobate) and hybrid architectures that reduce weight and size. The role of artificial intelligence methods (Deep Learning, Long Short-Term Memory) in nonlinear drift compensation and calibration is discussed. Promising directions based on the Brillouin effect and optomechanics are outlined as necessary steps toward a new generation of navigation systems able to operate in the absence of Global Navigation Satellite System signals. Full article
37 pages, 2045 KB  
Article
A Hybrid Artificial Intelligence Framework for Reliable and Seamless Vertical Handover in Next-Generation Heterogeneous Networks
by Sunisa Kunarak
Big Data Cogn. Comput. 2026, 10(5), 139; https://doi.org/10.3390/bdcc10050139 - 29 Apr 2026
Abstract
Next-generation heterogeneous wireless networks (HetNets) comprising LTE macro-cells, 5G New Radio (NR) small cells, and WiFi 6 access points aim to provide seamless connectivity under diverse mobility scenarios. However, vertical handover (VHO) remains a performance bottleneck because of the highly variable radio environments, dynamic user mobility, stringent quality of service (QoS) requirements, and the coexistence of multi-tier access technologies. Existing handover approaches based on deep learning and deep reinforcement learning (DRL) suffer from limitations: deep learning models lack decision-making capabilities, whereas DRL models, particularly deep Q-network (DQN)-based policies, face Q-value overestimation and unstable convergence. To overcome these limitations, this paper introduces a Hybrid deep double-Q networks (DDQN)–bidirectional long short-term memory (Bi-LSTM) Framework that integrates bi-directional mobility prediction and DRL-based adaptive decision-making. The Bi-LSTM module captures forward and backward temporal dependencies and predicts future Received Signal Strength (RSS) trajectories, mobility dynamics, and cell-edge transitions. The DDQN module stabilizes the action value estimation, mitigates overestimation bias, and enables context-aware handover decisions. A multi-tier simulation environment consisting of LTE, 5G NR, and WiFi 6 networks was developed using realistic path loss, shadowing, interference, and mobility models. Extensive evaluations demonstrated substantial improvements in mobility prediction accuracy, handover stability, radio link reliability, throughput efficiency, and latency reduction compared to conventional RSS-based and DQN-based schemes. The findings highlight the effectiveness of integrating predictive intelligence with reinforcement learning for reliable mobility management in 5G-Advanced and emerging 6G networks. Full article
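The overestimation fix at the heart of DDQN is the decoupled bootstrap target: the online network selects the next action, the target network evaluates it. A minimal sketch of the target computation:

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double-DQN bootstrap target. Selecting the action with the online
    Q-values but evaluating it with the target Q-values damps the
    Q-value overestimation that plain DQN suffers from."""
    if done:
        return reward
    a_star = int(np.argmax(next_q_online))   # online net picks the action
    return reward + gamma * next_q_target[a_star]  # target net scores it
```

In the paper's framework, the state fed to this update would include the Bi-LSTM's predicted RSS trajectory, making the handover decision anticipatory rather than reactive.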
41 pages, 5641 KB  
Article
High-Density PCB for On-Edge AI: Energy Harvesting, Thermal Management, and Sensor Fusion for UAVs in Clinical–Urban Missions
by Luigi Bibbo’, Giuliana Bilotta and Giovanni Angiulli
Electronics 2026, 15(9), 1885; https://doi.org/10.3390/electronics15091885 - 29 Apr 2026
Abstract
Unmanned aerial vehicles (UAVs) for urban and clinical–logistics missions operate under severe constraints in onboard energy, computation, and payload integrity. Addressing these challenges requires not only advanced algorithms but also a tight integration between embedded hardware, energy management, perception, and decision-making. This paper presents a unified UAV platform based on a system-level hardware–software co-design. First, a compact six-layer PCB (85 mm × 55 mm) integrates an NVIDIA Jetson Orin for on-edge artificial intelligence and a dedicated microcontroller for real-time flight control, with explicit power-domain separation, thermal management via thermal via arrays, and physical isolation of sensitive sensors. Second, a hybrid energy system combines LiPo batteries with perovskite photovoltaic cells and an MPPT stage with experimentally measured efficiency (94.5%), enabling stable operation under variable irradiance conditions. Third, an autonomous navigation strategy based on a Dueling Double Deep Q Network with Prioritized Experience Replay learns energy-efficient trajectories while explicitly incorporating payload thermal deviation (ΔT) and mechanical jerk into the reward function, thereby supporting clinically safe transport. Experimental validation on the physical platform includes onboard power and latency measurements, statistical evaluation across training and deterministic execution, and mission-level key performance indicators. Results show an average reduction of 18.4% in total energy consumption and a 12.1% increase in operational coverage under representative urban scenarios, with end-to-end decision latency below 50 ms. These findings demonstrate that a tightly integrated design of embedded hardware, hybrid energy management, and clinical-aware reinforcement learning enables robust, efficient, and application-ready UAV systems for urban and healthcare missions. Full article
(This article belongs to the Special Issue Circuit Design for Embedded Systems)
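The reward shaping described in the abstract, penalizing energy use, payload thermal deviation (ΔT), and mechanical jerk, can be sketched as below. This is a minimal illustrative formulation, not the authors' exact reward: the weights and argument names (`w_energy`, `w_thermal`, `w_jerk`, `delta_t_c`, `jerk_ms3`) are assumptions.

```python
# Hypothetical sketch of a clinical-aware RL reward, NOT the paper's exact
# formulation. Weights and units are illustrative assumptions.
def mission_reward(energy_used_j, delta_t_c, jerk_ms3,
                   w_energy=1.0, w_thermal=5.0, w_jerk=2.0,
                   progress=0.0):
    """Reward mission progress while penalizing energy spent (J), payload
    thermal deviation dT (degrees C), and mechanical jerk (m/s^3)."""
    penalty = (w_energy * energy_used_j
               + w_thermal * abs(delta_t_c)
               + w_jerk * abs(jerk_ms3))
    return progress - penalty
```

A higher thermal weight biases the learned trajectories toward gentler, thermally stable flight even at some energy cost, which matches the clinically safe transport objective stated above.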
27 pages, 1392 KB  
Article
W-HiTS-Attention: A Unified Wavelet-Hierarchical Residual-Attention Framework for Accurate and Efficient Short-Term Wind Power Forecasting
by Kaoutar Ait Chaoui, Hassan El Fadil and Oumaima Choukai
Technologies 2026, 14(5), 270; https://doi.org/10.3390/technologies14050270 - 29 Apr 2026
Abstract
Short-term wind power forecasting is considered a critical challenge in smart grid management due to the nonlinear, unstable, and multi-scale noise characteristics of wind signals. Although recent advances in hybrid deep learning have improved the accuracy of short-term wind power forecasting, many state-of-the-art models usually consider signal denoising, residual decomposition, and attention mechanisms as independent modules without providing a unified solution. This paper proposes an end-to-end solution, W-HiTS-Attention (Wavelet Transform, N-stacked Hierarchical Interpolation for Time Series, Attention), which coherently integrates wavelet denoising, hierarchical residual learning from N-HiTS (Neural Hierarchical Interpolation), and an in-block self-attention mechanism. The proposed solution outperforms 21 benchmarks in accuracy, including state-of-the-art baselines such as N-BEATS, N-HiTS, TCN, Informer, Autoformer, LSTM, BiLSTM, GRU, and Prophet, achieving an RMSE of 55.56 W and an R2 of 0.9918. Furthermore, the results show that the proposed solution is efficient in terms of parameter count (0.033M), latency (0.0036 ms/sample), and training time, making it promising for low-latency inference in resource-constrained environments. The results show that the coherent integration of frequency preprocessing, hierarchical residual forecasting, and attention-based temporal refinement provides a robust, explainable, and deployable solution for short-term wind power forecasting. Full article
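The frequency-preprocessing stage can be illustrated with a one-level Haar wavelet transform and soft thresholding of the detail coefficients. The paper's actual wavelet family, decomposition depth, and threshold rule are not given here, so every choice in this sketch is an assumption.

```python
# Minimal one-level Haar wavelet soft-threshold denoiser: a sketch of the
# wavelet front end, not the paper's exact preprocessing pipeline.
def haar_denoise(signal, threshold):
    """Decompose `signal` (even length) into approximation and detail
    coefficients, soft-threshold the details, then reconstruct."""
    assert len(signal) % 2 == 0, "one-level Haar needs an even-length signal"
    s = 2 ** -0.5  # orthonormal Haar scaling factor 1/sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    # Soft thresholding shrinks small (noise-dominated) detail coefficients
    # toward zero while preserving their sign.
    soft = [max(abs(d) - threshold, 0.0) * (1.0 if d >= 0 else -1.0)
            for d in detail]
    out = []
    for a, d in zip(approx, soft):
        out += [(a + d) * s, (a - d) * s]
    return out
```

With `threshold = 0`, the transform is perfectly invertible; as the threshold grows, high-frequency noise in adjacent samples is averaged away, which is the property the forecasting backbone relies on.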
36 pages, 14306 KB  
Article
Enhancing SDN Intrusion Detection via Multi-Hybrid Deep Learning Fusion and Explainable AI
by Usman Ahmed and Muhammad Tariq Sadiq
Mathematics 2026, 14(9), 1498; https://doi.org/10.3390/math14091498 - 29 Apr 2026
Abstract
Software-defined networking (SDN) represents a paradigm shift in network management, but its centralized control plane introduces new and severe security vulnerabilities. Conventional intrusion detection systems, including signature- and rule-based methods, lack adaptability and interpretability in the face of evolving threats. This paper proposes a multi-hybrid deep learning fusion ensemble (MHDLFE) to enhance intrusion detection in SDN environments. The framework integrates Deep Neural Networks (DNNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) models via feature fusion and a meta-classifier, thereby improving both detection performance and robustness. To address the critical need for transparency in security systems, the proposed approach incorporates Explainable AI techniques, specifically Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), providing interpretable insights into model decisions. The proposed model achieves strong performance on the NSL-KDD and CIC-IDS2017 datasets, attaining near-perfect binary classification scores of 97.91% and 93.30%, and multiclass accuracies of 98.61% and 97.91%, respectively. These results demonstrate that the proposed framework delivers an effective and trustworthy SDN intrusion detection system by combining deep learning, ensemble fusion, and explainable AI to support accurate, transparent, and reliable cybersecurity decision-making. Full article
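The feature-fusion and meta-classification stage can be sketched schematically as follows. The paper trains a meta-classifier over the fused base-model outputs; the averaging rule below is a simple stand-in for that learned stage, and all function names and shapes are illustrative assumptions.

```python
# Schematic of the fusion stage: each base model (DNN, CNN, RNN, LSTM) emits a
# class-probability vector; the fused vector is their concatenation, and a
# meta-rule maps it to a final decision. The untrained averaging meta-rule
# here is a stand-in for the paper's learned meta-classifier.
def fuse_predictions(base_outputs):
    """Concatenate per-model probability vectors into one fused feature vector."""
    fused = []
    for probs in base_outputs:
        fused.extend(probs)
    return fused

def meta_classify(fused, n_models, n_classes):
    """Toy meta-classifier: average each class's probability across the base
    models and return the argmax class index."""
    avg = [sum(fused[m * n_classes + c] for m in range(n_models)) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])
```

In the full system this meta-stage would be a trained classifier, and SHAP/LIME would then be applied to the ensemble to attribute each decision back to the underlying flow features.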