Search Results (598)

Search Parameters:
Keywords = CNN accelerator

53 pages, 5532 KB  
Article
Neural Network Method for Detecting Low-Intensity DDoS Attacks with Stochastic Fragmentation and Its Adaptation to Law Enforcement Activities in the Cyber Protection of Critical Infrastructure Facilities
by Serhii Vladov, Victoria Vysotska, Łukasz Ścisło, Rafał Dymczyk, Oleksandr Posashkov, Mariia Nazarkevych, Oleksandr Yunin, Liliia Bobrishova and Yevheniia Pylypenko
Computers 2026, 15(2), 84; https://doi.org/10.3390/computers15020084 (registering DOI) - 1 Feb 2026
Abstract
This article develops a method for the early detection of low-intensity DDoS attacks based on a three-factor vector metric and implements an applied hybrid neural network traffic analysis system that combines preprocessing stages, competitive pretraining (SOM), a radial basis layer, and an associative Grossberg output, followed by gradient optimisation. The initial tools used are statistical online estimates (moving or EWMA estimates), CUSUM-like statistics for identifying small stable shifts, and deterministic signature filters. An algorithm has been developed that aggregates the components of fragmentation, reception intensity, and service availability into a single index. Key features include the physically interpretable features, a hybrid neural network architecture with associative stability and low computational complexity, and built-in mechanisms for adaptive threshold calibration and online training. An experimental evaluation of the developed method using real telemetry data demonstrated high recognition performance of the proposed approach (accuracy is 0.945, AUC is 0.965, F1 is 0.945, localisation accuracy is 0.895, with an average detection latency of 55 ms), with these results outperforming the compared CNN-LSTM and Transformer solutions. The scientific contribution of this study lies in the development of a robust, computationally efficient, and application-oriented solution for detecting low-intensity attacks with the ability to integrate into edge and SOC systems. Practical recommendations for reducing false positives and further improvements through low-training methods and hardware acceleration are also proposed. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (3rd Edition))
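The statistical front end described in this abstract (moving/EWMA estimates followed by a CUSUM-like statistic for small, stable shifts) is standard change-detection machinery. A minimal Python sketch, in which the parameter values (alpha, slack k, threshold h, the simulated traffic rates) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def ewma(x, alpha=0.2):
    """Exponentially weighted moving average of a 1-D series."""
    s = np.empty_like(x, dtype=float)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

def cusum_alarm(x, target, k=0.5, h=8.0):
    """One-sided CUSUM: accumulate the excess of x over (target + k)
    and return the first index where it exceeds h, or -1 if none."""
    g = 0.0
    for t, v in enumerate(x):
        g = max(0.0, g + (v - target - k))
        if g > h:
            return t
    return -1

rng = np.random.default_rng(0)
# Baseline traffic rate, then a small sustained shift (a low-intensity attack).
x = np.concatenate([rng.normal(10, 1, 200), rng.normal(11, 1, 200)])
alarm_at = cusum_alarm(ewma(x), target=10.0)
```

A low-intensity attack appears as a small but persistent mean shift; the CUSUM statistic accumulates the excess over a slack value and alarms once it crosses the threshold, which is why it catches shifts too small for a simple level check.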
19 pages, 2072 KB  
Article
A Reconfigurable CNN-2D Hardware Architecture for Real-Time Brain Cancer Multi-Classification on FPGA
by Ayoub Mhaouch, Wafa Gtifa, Ibtihel Nouira, Abdessalem Ben Abdelali and Mohsen Machhout
Algorithms 2026, 19(2), 107; https://doi.org/10.3390/a19020107 (registering DOI) - 1 Feb 2026
Abstract
Brain cancer classification using deep learning has gained significant attention due to its potential to improve early diagnosis and treatment planning. In this work, we propose a reconfigurable and hardware-optimized CNN-2D architecture implemented on FPGA for multiclass classification of brain tumors from MRI images. The contribution of this study lies in the development of a lightweight CNN model and a modular hardware design, where three key IP cores (Conv2D, MaxPooling, and ReLU) are architected with parameterizable kernels, efficient dataflow, and optimized memory reuse to support real-time processing on resource-constrained platforms. These IPs are iteratively reconfigured to process each CNN layer, enabling flexibility while maintaining low latency. To evaluate the proposed architecture, we first implement the model in software on a Dual-Core Cortex-A9 processor and then deploy the hardware-accelerated version on an XC7Z020 FPGA. Performance is assessed in terms of execution time, power consumption, and classification accuracy. The FPGA implementation achieves a 93.21% reduction in latency and a 67.5% reduction in power consumption, while maintaining a competitive accuracy of 96.09% compared with 98.43% for the software version. These results demonstrate that the proposed reconfigurable FPGA-based architecture offers a strong balance between accuracy, real-time performance, and energy efficiency, making it highly suitable for embedded brain tumor classification systems. Full article
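The three IP cores named in the abstract (Conv2D, MaxPooling, ReLU) implement standard CNN operations. A minimal NumPy reference model of one such layer chain, with a toy image and a kernel chosen purely for illustration (nothing here reflects the paper's hardware design):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN practice)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2x2(x):
    """Non-overlapping 2x2 max pooling, dropping any odd remainder."""
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])  # illustrative 2x2 kernel
feat = maxpool2x2(relu(conv2d(img, kernel)))
```

A hardware IP core would stream the same arithmetic through parameterizable kernel sizes and reuse on-chip buffers, but the functional result layer by layer is what this reference model computes.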

37 pages, 24380 KB  
Article
Denoising of CT and MRI Images Using Decomposition-Based Curvelet Thresholding and Classical Filtering Techniques
by Mahmoud Nasr, Krzysztof Brzostowski, Rafał Obuchowicz and Adam Piórkowski
Appl. Sci. 2026, 16(3), 1335; https://doi.org/10.3390/app16031335 - 28 Jan 2026
Abstract
Medical image denoising is crucial for enhancing the diagnostic accuracy of CT and MRI images. This paper presents a modular hybrid framework that combines multiscale decomposition techniques (Empirical Mode Decomposition, Variational Mode Decomposition, Bidimensional EMD, and Multivariate EMD) with curvelet transform thresholding and traditional spatial filters. The methodology was assessed using a phantom dataset containing regulated Rician noise, clinical CT images rebuilt with sharp (B50f) and medium (B46f) kernels, and MRI scans obtained at various GRAPPA acceleration factors. In phantom trials, MEMD–Curvelet attained the highest SSIM (0.964) and PSNR (28.35 dB), while preserving commendable perceptual scores (NIQE approximately 7.55, BRISQUE around 38.8). In CT images, VMD–Curvelet and MEMD–Curvelet consistently outperformed classical filters, achieving SSIM values over 0.95 and PSNR values above 28 dB, even with sharp-kernel reconstructions. In MRI datasets, MEMD–Curvelet and BEMD–Curvelet reduced perceptual distortion, decreasing NIQE by up to 15% and BRISQUE by 20% compared to Gaussian and median filtering. Deep learning baselines validated the framework’s competitiveness: BM3D attained high fidelity but necessitated 6.65 s per slice, while DnCNN delivered equivalent SSIM (0.958) with a diminished runtime of 2.33 s. The results indicate that the proposed framework excels at noise reduction and structure preservation across various imaging settings, surpassing independent filtering and transform-only methods. Its versatility and efficiency underscore its potential for therapeutic integration in situations necessitating high-quality denoising under limited acquisition conditions. Full article
21 pages, 4321 KB  
Article
A Data Augmentation Method for Shearer Rocker Arm Bearing Fault Diagnosis Based on GA-WT-SDP and WCGAN
by Zhaohong Wu, Shuo Wang, Chang Liu, Haiyang Wu, Jiang Yi, Yusong Pang and Gang Cheng
Machines 2026, 14(2), 144; https://doi.org/10.3390/machines14020144 - 26 Jan 2026
Abstract
This work addresses the challenges of inadequate data acquisition and the limited availability of labeled samples for shearer rocker arm bearing faults by developing a data augmentation methodology that synergistically incorporates the Genetic Algorithm-optimized Wavelet Transform Symmetrical Dot Pattern (GA-WT-SDP) with a Wasserstein Conditional Generative Adversarial Network (WCGAN). In the initial step, the Genetic Algorithm (GA) is employed to refine the mapping parameters of the Wavelet Transform Symmetrical Dot Pattern (WT-SDP), facilitating the transformation of raw vibration signals into advanced and discriminative graphical representations. Thereafter, the Wasserstein distance in conjunction with a gradient penalty mechanism is introduced through the WCGAN, thereby ensuring higher-quality generated samples and improved stability during model training. Experimental results validate that the proposed approach yields accelerated convergence and superior performance in sample generation. The augmented data significantly bolsters the generalization ability and predictive accuracy of fault diagnosis models trained on small datasets, with notable gains achieved in deep architectures (CNNs, LSTMs). The research substantiates that this technique helps overcome overfitting, enhances feature representation capacity, and ensures consistently high identification accuracy even in complex working environments. Full article
(This article belongs to the Section Machines Testing and Maintenance)

19 pages, 1007 KB  
Review
Machine Learning-Powered Vision for Robotic Inspection in Manufacturing: A Review
by David Yevgeniy Patrashko and Vladimir Gurau
Sensors 2026, 26(3), 788; https://doi.org/10.3390/s26030788 - 24 Jan 2026
Abstract
Machine learning (ML)-powered vision for robotic inspection has accelerated with smart manufacturing, enabling automated defect detection and classification and real-time process optimization. This review provides insight into the current landscape and state-of-the-art practices in smart manufacturing quality control (QC). More than 50 studies spanning across automotive, aerospace, assembly, and general manufacturing sectors demonstrate that ML-powered vision is technically viable for robotic inspection in manufacturing. The accuracy of defect detection and classification frequently exceeds 95%, with some vision systems achieving 98–100% accuracy in controlled environments. The vision systems use predominantly self-designed convolutional neural network (CNN) architectures, YOLO variants, or traditional ML vision models. However, 77% of implementations remain at the prototype or pilot scale, revealing systematic deployment barriers. A discussion is provided to address the specifics of the vision systems and the challenges that these technologies continue to face. Finally, recommendations for future directions in ML-powered vision for robotic inspection in manufacturing are provided. Full article
(This article belongs to the Section Intelligent Sensors)

47 pages, 2196 KB  
Systematic Review
Data-Driven Load Forecasting in Microgrids: Integrating External Factors for Efficient Control and Decision-Making
by Kevin David Martinez-Zapata, Daniel Ospina-Acero, Jhon James Granada-Torres, Nicolás Muñoz-Galeano, Natalia Gaviria-Gómez, Juan Felipe Botero-Vega and Sergio Armando Gutiérrez-Betancur
Energies 2026, 19(2), 555; https://doi.org/10.3390/en19020555 - 22 Jan 2026
Abstract
Accurate load forecasting is essential for optimizing microgrid and smart grid operations, thereby supporting Energy Management Systems (EMSs). Load forecasting also plays a key role in integrating renewable energy, ensuring grid stability, and facilitating decision-making. In this regard, we present a comprehensive literature review that combines both bibliometric analysis and critical literature synthesis to evaluate state-of-the-art forecasting techniques. Based on a screened corpus of over 200 scientific publications from 2015 to 2024, our analysis reveals a significant shift in the field: AI-based approaches, including Machine Learning (ML) and Deep Learning (DL), represent more than 55% of the analyzed literature, overtaking traditional statistical models. The bibliometric results highlight a 300% increase in publications focusing on ML-based models (e.g., SVM, CNN, LSTM) over the years. Furthermore, approximately 70% of the total reviewed works use at least one exogenous variable, such as weather variables, socioeconomic indicators, and cultural behavior. These findings reflect the transition from traditional statistical models to more flexible and scalable approaches. However, socioeconomic and cultural variables remain underutilized in the literature, particularly for long-term planning. Despite the progress load forecasting processes have made in recent years, thanks to advanced modeling, a few hurdles remain to realizing their full potential in modern microgrids. Thus, we argue that future research should focus on three key areas: (i) scalable real-time adaptive models, including computational complexity characterization, (ii) standardization in data collection for seamless integration of exogenous variables, and (iii) real-world application of forecasting models in decision-making that supports EMSs. Progress in these areas may enhance grid stability, optimize resource allocation, and accelerate the transition to sustainable energy systems. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)

26 pages, 9979 KB  
Article
An Intelligent Multi-Port Temperature Control Scheme with Open-Circuit Fault Diagnosis for Aluminum Heating Systems
by Song Xu, Yiqi Rui, Lijuan Wang, Pengqiang Nie, Wei Jiang, Linfeng Sun and Seiji Hashimoto
Processes 2026, 14(2), 362; https://doi.org/10.3390/pr14020362 - 20 Jan 2026
Abstract
Industrial aluminum-block heating processes exhibit nonlinear dynamics, substantial time delays, and stringent requirements for fault detection and diagnosis, especially in semiconductor manufacturing and other high-precision electronic processes, where slight temperature deviations can accelerate device degradation or even cause catastrophic failures. To address these challenges, this study presents a digital twin-based intelligent heating platform for aluminum blocks with a dual-artificial-intelligence framework (dual-AI) for control and diagnosis, which is applicable to multi-port aluminum-block heating systems. The system enables real-time observation and simulation of high-temperature operational conditions via virtual-real interaction. The platform precisely regulates a nonlinear temperature control system with a prolonged time delay by integrating a conventional proportional–integral–derivative (PID) controller with a Levenberg–Marquardt-optimized backpropagation (LM-optimized BP) neural network. Simultaneously, a relay is employed to sever the connection to the heater, thereby simulating an open-circuit fault. Throughout this procedure, sensor data are gathered simultaneously, facilitating the creation of a spatiotemporal time-series dataset under both normal and fault conditions. A one-dimensional convolutional neural network (1D-CNN) is trained to attain high-accuracy fault detection and localization. PID+LM-BP achieves a response time of about 200 s in simulation. In the 100 °C to 105 °C step experiment, it reaches a settling time of 6 min with a 3 °C overshoot. Fault detection uses a 0.38 °C threshold defined based on the absolute minute-to-minute change of the 1-min mean temperature. Full article
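The detection criterion at the end of the abstract (flag a fault when the absolute minute-to-minute change of the 1-min mean temperature exceeds 0.38 °C) can be sketched as follows; the synthetic temperature trace is an assumed illustration, not data from the paper:

```python
import numpy as np

def fault_flags(temps_per_sec, threshold=0.38):
    """Compute 1-min mean temperatures from per-second samples, then
    flag any minute whose mean jumps more than `threshold` degC from
    the previous minute's mean (the abstract's open-circuit criterion)."""
    n = len(temps_per_sec) // 60 * 60          # drop an incomplete last minute
    minute_means = temps_per_sec[:n].reshape(-1, 60).mean(axis=1)
    jumps = np.abs(np.diff(minute_means))
    return jumps > threshold

# 5 min of stable ~100 degC readings, then a heater open-circuit: steady cooling.
stable = 100 + 0.05 * np.sin(np.arange(300) / 10.0)
cooling = 100 - 0.02 * np.arange(180)          # drops ~1.2 degC per minute
flags = fault_flags(np.concatenate([stable, cooling]))
```

Averaging over a minute before differencing suppresses sensor noise, so a small threshold like 0.38 °C can separate genuine heater loss from normal fluctuation.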

16 pages, 998 KB  
Article
Architecture Design of a Convolutional Neural Network Accelerator for Heterogeneous Computing Based on a Fused Systolic Array
by Yang Zong, Zhenhao Ma, Jian Ren, Yu Cao, Meng Li and Bin Liu
Sensors 2026, 26(2), 628; https://doi.org/10.3390/s26020628 - 16 Jan 2026
Abstract
Convolutional Neural Networks (CNNs) generally suffer from excessive computational overhead, high resource consumption, and complex network structures, which severely restrict the deployment on microprocessor chips. Existing related accelerators only have an energy efficiency ratio of 2.32–6.5925 GOPs/W, making it difficult to meet the low-power requirements of embedded application scenarios. To address these issues, this paper proposes a low-power and high-energy-efficiency CNN accelerator architecture based on a central processing unit (CPU) and an Application-Specific Integrated Circuit (ASIC) heterogeneous computing architecture, adopting an operator-fused systolic array algorithm with the YOLOv5n target detection network as the application benchmark. It integrates a 2D systolic array with Conv-BN fusion technology to achieve deep operator fusion of convolution, batch normalization and activation functions; optimizes the RISC-V core to reduce resource usage; and adopts a locking mechanism and a prefetching strategy for the asynchronous platform to ensure operational stability. Experiments on the Nexys Video development board show that the architecture achieves 20.6 GFLOPs of computational performance, 1.96 W of power consumption, and 10.46 GOPs/W of energy efficiency ratio, which is 58–350% higher than existing mainstream accelerators, thus demonstrating excellent potential for embedded deployment. Full article
(This article belongs to the Section Intelligent Sensors)
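Conv-BN fusion, one ingredient of the operator fusion described above, folds the batch-normalization scale and shift into the convolution weights and bias so the two layers collapse into a single operation at inference time. A NumPy sketch under assumed per-output-channel BN and illustrative shapes (not the paper's actual layer dimensions):

```python
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm (gamma, beta, running mean/var) into conv
    weights w of shape (out_ch, in_ch, kh, kw) and bias b (out_ch,)."""
    scale = gamma / np.sqrt(var + eps)            # one factor per output channel
    w_fused = w * scale[:, None, None, None]
    b_fused = (b - mean) * scale + beta
    return w_fused, b_fused

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 3, 3, 3)); b = rng.normal(size=4)
gamma = rng.uniform(0.5, 1.5, 4);  beta = rng.normal(size=4)
mean = rng.normal(size=4);         var = rng.uniform(0.5, 2.0, 4)

# Equivalence check on the per-channel linear output y of a convolution:
# BN(y + b) must equal scale * y + b_fused.
y = rng.normal(size=4)
bn_out = gamma * (y + b - mean) / np.sqrt(var + 1e-5) + beta
w_f, b_f = fuse_conv_bn(w, b, gamma, beta, mean, var)
fused_out = y * (gamma / np.sqrt(var + 1e-5)) + b_f
```

Because the fused kernel needs no separate normalization pass, a systolic array can keep the data resident and apply convolution, BN, and (with a trailing max) the activation in one sweep, which is the point of deep operator fusion.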

19 pages, 1607 KB  
Article
Real-Time Bird Audio Detection with a CNN-RNN Model on a SoC-FPGA
by Rodrigo Lopes da Silva, Gustavo Jacinto, Mário Véstias and Rui Policarpo Duarte
Electronics 2026, 15(2), 354; https://doi.org/10.3390/electronics15020354 - 13 Jan 2026
Abstract
Monitoring wildlife has become increasingly important for understanding the evolution of species and ecosystem health. Acoustic monitoring offers several advantages over video-based approaches, enabling continuous 24/7 observation and robust detection under challenging environmental conditions. Deep learning models have demonstrated strong performance in audio classification. However, their computational complexity poses significant challenges for deployment on low-power embedded platforms. This paper presents a low-power embedded system for real-time bird audio detection. A hybrid CNN–RNN architecture is adopted, redesigned, and quantized to significantly reduce model complexity while preserving classification accuracy. To support efficient execution, a custom hardware accelerator was developed and integrated into a Zynq UltraScale+ ZU3CG FPGA. The proposed system achieves an accuracy of 87.4%, processes up to 5 audio samples per second, and operates at only 1.4 W, demonstrating its suitability for autonomous, energy-efficient wildlife monitoring applications. Full article

35 pages, 1875 KB  
Review
FPGA-Accelerated ECG Analysis: Narrative Review of Signal Processing, ML/DL Models, and Design Optimizations
by Laura-Ioana Mihăilă, Claudia-Georgiana Barbura, Paul Faragó, Sorin Hintea, Botond Sandor Kirei and Albert Fazakas
Electronics 2026, 15(2), 301; https://doi.org/10.3390/electronics15020301 - 9 Jan 2026
Abstract
Recent advances in deep learning have had a significant impact on biomedical applications, driving precise actions in automated diagnostic processes. However, integrating neural networks into medical devices requires meeting strict requirements regarding computing power, energy efficiency, reconfigurability, and latency, essential conditions for real-time inference. Field-Programmable Gate Array (FPGA) architectures provide a high level of flexibility, performance, and parallel execution, thus making them a suitable option for the real-world implementation of machine learning (ML) and deep learning (DL) models in systems dedicated to the analysis of physiological signals. This paper presents a review of intelligent algorithms for electrocardiogram (ECG) signal classification, including Support Vector Machines (SVMs), Artificial Neural Networks (ANNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory Networks (LSTMs), and Convolutional Neural Networks (CNNs), which have been implemented on FPGA platforms. A comparative evaluation of the performances of these hardware-accelerated solutions is provided, focusing on their classification accuracy. At the same time, the FPGA families used are analyzed, along with the reported performances in terms of operating frequency, power consumption, and latency, as well as the optimization strategies applied in the design of deep learning hardware accelerators. The conclusions emphasize the popularity and efficiency of CNN architectures in the context of ECG signal classification. The study aims to offer a current overview and to support specialists in the field of FPGA design and biomedical engineering in the development of accelerators dedicated to physiological signals analysis. Full article
(This article belongs to the Special Issue Emerging Biomedical Electronics)

30 pages, 12301 KB  
Article
Deep Learning 1D-CNN-Based Ground Contact Detection in Sprint Acceleration Using Inertial Measurement Units
by Felix Friedl, Thorben Menrad and Jürgen Edelmann-Nusser
Sensors 2026, 26(1), 342; https://doi.org/10.3390/s26010342 - 5 Jan 2026
Abstract
Background: Ground contact (GC) detection is essential for sprint performance analysis. Inertial measurement units (IMUs) enable field-based assessment, but their reliability during sprint acceleration remains limited when using heuristic and recently used machine learning algorithms. This study introduces a deep learning one-dimensional convolutional neural network (1D-CNN) to improve GC event and GC times detection in sprint acceleration. Methods: Twelve sprint-trained athletes performed 60 m sprints while bilateral shank-mounted IMUs (1125 Hz) and synchronized high-speed video (250 Hz) captured the first 15 m. Video-derived GC events served as reference labels for model training, validation, and testing, using resultant acceleration and angular velocity as model inputs. Results: The optimized model (18 inception blocks, window = 100, stride = 15) achieved mean Hausdorff distances ≤ 6 ms and 100% precision and recall for both validation and test datasets (Rand Index ≥ 0.977). Agreement with video references was excellent (bias < 1 ms, limits of agreement ± 15 ms, r > 0.90, p < 0.001). Conclusions: The 1D-CNN surpassed heuristic and prior machine learning approaches in the sprint acceleration phase, offering robust, near-perfect GC detection. These findings highlight the promise of deep learning-based time-series models for reliable, real-world biomechanical monitoring in sprint acceleration tasks. Full article
(This article belongs to the Special Issue Inertial Sensing System for Motion Monitoring)
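The windowing reported above (window = 100 samples, stride = 15) is the standard way to frame a continuous IMU stream into fixed-size inputs for a 1D-CNN. A NumPy sketch on a synthetic one-second signal at the abstract's 1125 Hz rate (the actual sensor channels and labels are of course not reproduced here):

```python
import numpy as np

def frame_signal(x, window=100, stride=15):
    """Slice a 1-D signal into overlapping windows for a 1D-CNN.
    Returns an array of shape (n_windows, window)."""
    views = np.lib.stride_tricks.sliding_window_view(x, window)
    return views[::stride].copy()

fs = 1125                                          # IMU sampling rate, Hz
x = np.sin(2 * np.pi * 3 * np.arange(fs) / fs)     # 1 s synthetic signal
frames = frame_signal(x)
```

Consecutive frames overlap by 85 samples, so a ground-contact event near a window edge in one frame sits well inside a neighboring frame, which is what makes per-window classification robust to event timing.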

19 pages, 1786 KB  
Article
A Machine Learning-Driven Framework for Real-Time Lithology Identification and Drilling Parameter Optimization
by Qingshan Liu, Dengyue Li, Shuo Liu, Hefeng Liang, Yuchen Zhou, Conghui Zhao, Kun Liu, Gang Hui, Feng Ni, Peng Du and Siwen Wang
Processes 2026, 14(1), 156; https://doi.org/10.3390/pr14010156 - 2 Jan 2026
Abstract
Conventional drilling parameter optimization, heavily reliant on lagging lithology data from periodic mud logging, suffers from significant delays between formation change detection and parameter adjustment. This latency often leads to reduced Rate of Penetration (ROP), accelerated tool wear, and increased risk of drilling complications. To address this, this work introduces a closed-loop machine learning framework for real-time lithology identification and autonomous parameter optimization. Its core is a hybrid deep learning model (1D-CNN-LSTM) that establishes a direct mapping from surface drilling parameters, Weight on Bit (WOB), Rotary Speed (RPM), Torque, ROP, to formation lithology, deliberately excluding dependency on expensive Logging-While-Drilling (LWD) tools to ensure cost-effective and broad applicability. Upon lithology change detection, the system retrieves the historically optimal Mechanical Specific Energy (MSE) value for the identified rock type and solves an inverse MSE model to compute optimal WOB and RPM setpoints within operational constraints. Field validation in a comparative trial demonstrated the framework’s efficacy: the test well achieved a 17.4% increase in ROP, a 37.8% reduction in Non-Productive Time, and an 87.5% decrease in stuck pipe incidents compared to an offset well drilled conventionally. Full article
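The abstract's inverse-MSE step can be illustrated with the classical Teale definition of mechanical specific energy (an axial term plus a rotary term). The paper's exact model, units, and constraint handling are not given in the abstract, so the function names, unit choices, and operating limits below are assumptions:

```python
import math

def mechanical_specific_energy(wob, rpm, torque, rop, bit_area):
    """Teale-style MSE in consistent SI units: wob [N], torque [N*m],
    rop [m/s], bit_area [m^2]; returns energy per rock volume [Pa]."""
    omega = 2 * math.pi * rpm / 60          # rotary speed in rad/s
    return wob / bit_area + omega * torque / (bit_area * rop)

def wob_for_target_mse(mse_target, rpm, torque, rop, bit_area,
                       wob_min=0.0, wob_max=2.5e5):
    """Invert the MSE expression for WOB and clip to operating limits."""
    omega = 2 * math.pi * rpm / 60
    wob = (mse_target - omega * torque / (bit_area * rop)) * bit_area
    return min(max(wob, wob_min), wob_max)

bit_area = 0.0366                # ~8.5-inch bit face area, m^2 (illustrative)
rop = 30 / 3600.0                # 30 m/h expressed in m/s
mse = mechanical_specific_energy(1.0e5, 120, 5000, rop, bit_area)
```

Given the historically optimal MSE for the identified lithology, solving the expression for WOB (and similarly for RPM) yields the setpoints the closed loop would push to the driller, clipped to rig constraints.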

15 pages, 659 KB  
Article
Context-Aware Road Event Detection Using Hybrid CNN–BiLSTM Networks
by Abiel Aguilar-González and Alejandro Medina Santiago
Vehicles 2026, 8(1), 4; https://doi.org/10.3390/vehicles8010004 - 2 Jan 2026
Abstract
Road anomaly detection is essential for intelligent transportation systems and road maintenance. This work presents a MATLAB-native hybrid Convolutional Neural Network–Bidirectional Long Short-Term Memory (CNN–BiLSTM) framework for context-aware road event detection using multiaxial acceleration and vibration signals. The proposed architecture integrates short-term feature extraction via one-dimensional convolutional layers with bidirectional LSTM-based temporal modeling, enabling simultaneous capture of instantaneous signal morphology and long-range dependencies across driving trajectories. Multiaxial data were acquired at 50 Hz using an AQ-1 On-Board Diagnostics II (OBDII) Data Logger during urban and suburban routes in San Andrés Cholula, Puebla, Mexico. Our hybrid CNN–BiLSTM model achieved a global accuracy of 95.91% and a macro F1-score of 0.959. Per-class F1-scores ranged from 0.932 (none) to 0.981 (pothole), with specificity values above 0.98 for all event categories. Qualitative analysis demonstrates that this architecture outperforms previous CNN-only vibration-based models by approximately 2–3% in macro F1-score while maintaining balanced precision and recall across all event types. Visualization of BiLSTM activations highlights enhanced interpretability and contextual discrimination, particularly for events with similar short-term signatures. Further, the proposed framework’s low computational overhead and compatibility with MATLAB Graphics Processing Unit (GPU) Coder support its feasibility for real-time embedded deployment. These results demonstrate the effectiveness and robustness of our hybrid CNN–BiLSTM approach for road anomaly detection using only acceleration and vibration signals, establishing a validated continuation of previous CNN-based research. 
Beyond the experimental validation, the proposed framework provides a practical foundation for real-time pavement monitoring systems and can support intelligent transportation applications such as preventive road maintenance, driver assistance, and large-scale deployment on low-power embedded platforms. Full article

29 pages, 4094 KB  
Article
Hybrid LSTM–DNN Architecture with Low-Discrepancy Hypercube Sampling for Adaptive Forecasting and Data Reliability Control in Metallurgical Information-Control Systems
by Jasur Sevinov, Barnokhon Temerbekova, Gulnora Bekimbetova, Ulugbek Mamanazarov and Bakhodir Bekimbetov
Processes 2026, 14(1), 147; https://doi.org/10.3390/pr14010147 - 1 Jan 2026
Abstract
The study focuses on the design of an intelligent information-control system (ICS) for metallurgical production, aimed at robust forecasting of technological parameters and automatic self-adaptation under noise, anomalies, and data drift. The proposed architecture integrates a hybrid LSTM–DNN model with low-discrepancy hypercube sampling using Sobol and Halton sequences to ensure uniform coverage of operating conditions and the hyperparameter space. The processing pipeline includes preprocessing and temporal synchronization of measurements, a parameter identification module, anomaly detection and correction using an ε-threshold scheme, and a decision-making and control loop. In simulation scenarios modeling the dynamics of temperature, pressure, level, and flow (1 min sampling interval, injected anomalies, and measurement noise), the hybrid model outperformed GRU and CNN architectures: a determination coefficient of R2 > 0.92 was achieved for key indicators, MAE and RMSE improved by 7–15%, and the proportion of unreliable measurements after correction decreased to <2% (compared with 8–12% without correction). The experiments also demonstrated accelerated adaptation during regime changes. The scientific novelty lies in combining recurrent memory and deep nonlinear approximation with deterministic experimental design in the hypercube of states and hyperparameters, enabling reproducible self-adaptation of the ICS and increased noise robustness without upgrading the measurement hardware. Modern metallurgical information-control systems operate under non-stationary regimes and limited measurement reliability, which reduces the robustness of conventional forecasting and decision-support approaches. To address this issue, a hybrid LSTM–DNN architecture combined with low-discrepancy hypercube probing and anomaly-aware data correction is proposed. 
The proposed approach is distinguished by the integration of hybrid neural forecasting, deterministic hypercube-based adaptation, and anomaly-aware data correction within a unified information-control loop for non-stationary industrial processes. Full article
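As a purely illustrative sketch of the low-discrepancy hypercube sampling this abstract describes, the snippet below generates Halton points covering a two-dimensional unit hypercube and scales them onto hyperparameter ranges. The parameter names and ranges (LSTM units, learning rate) are assumptions for illustration, not values from the paper.

```python
# Minimal Halton-sequence sketch for covering a hyperparameter hypercube.
# The hyperparameters and their bounds below are hypothetical examples.

def radical_inverse(index, base):
    """Van der Corput radical inverse of `index` in the given base."""
    result, fraction = 0.0, 1.0 / base
    while index > 0:
        index, digit = divmod(index, base)
        result += digit * fraction
        fraction /= base
    return result

def halton(n_points, bases=(2, 3)):
    """First n_points of the Halton sequence in len(bases) dimensions."""
    return [
        tuple(radical_inverse(i, b) for b in bases)
        for i in range(1, n_points + 1)  # skip index 0 (the origin)
    ]

def scale(point, lows, highs):
    """Map a unit-hypercube point onto concrete hyperparameter ranges."""
    return tuple(lo + u * (hi - lo) for u, lo, hi in zip(point, lows, highs))

# Example: two assumed hyperparameters, LSTM units in [32, 256] and
# learning rate in [1e-4, 1e-2].
candidates = [scale(p, (32, 1e-4), (256, 1e-2)) for p in halton(8)]
```

Compared with random sampling, successive Halton points fill the hypercube deterministically and uniformly, which is what makes the resulting self-adaptation runs reproducible.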
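The ε-threshold anomaly detection and correction step named in the metallurgical ICS abstract above can be sketched roughly as follows; this minimal version compares each sample against a trailing moving-average baseline and replaces outliers with that baseline. The window length and threshold are illustrative assumptions, not the paper's values.

```python
# Illustrative epsilon-threshold anomaly correction over a 1-D series.
# Window size and eps are assumed values, not taken from the paper.

def correct_anomalies(series, eps, window=5):
    """Replace samples deviating from a trailing moving average by more
    than eps with that moving-average estimate."""
    corrected = list(series)
    for i in range(len(series)):
        history = corrected[max(0, i - window):i]
        if not history:
            continue  # no baseline yet for the first sample
        baseline = sum(history) / len(history)
        if abs(series[i] - baseline) > eps:
            corrected[i] = baseline
    return corrected
```

Using already-corrected values as the baseline keeps a single spike from contaminating the estimate for subsequent samples.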
19 pages, 3937 KB  
Article
Forecasting Daily Ambient PM2.5 Concentrations in Qingdao City Using Deep Learning and Hybrid Interpretable Models and Analysis of Driving Factors Using SHAP
by Zhenfang He, Qingchun Guo, Zuhan Zhang, Genyue Feng, Shuaisen Qiao and Zhaosheng Wang
Toxics 2026, 14(1), 44; https://doi.org/10.3390/toxics14010044 - 30 Dec 2025
Cited by 1 | Viewed by 458
Abstract
With the acceleration of urbanization in China, air pollution is becoming increasingly serious, especially PM2.5 pollution, which poses a significant threat to public health. The study employed different deep learning models, including recurrent neural network (RNN), artificial neural network (ANN), convolutional neural [...] Read more.
With the acceleration of urbanization in China, air pollution is becoming increasingly serious, especially PM2.5 pollution, which poses a significant threat to public health. The study employed different deep learning models, including recurrent neural network (RNN), artificial neural network (ANN), convolutional neural network (CNN), bidirectional long short-term memory (BiLSTM), Transformer, and a novel hybrid interpretable CNN–BiLSTM–Transformer architecture, to forecast daily PM2.5 concentrations on an integrated dataset. Meteorological factors and atmospheric pollutant concentrations in Qingdao City were used as input features for the models. Among the models tested, the hybrid CNN–BiLSTM–Transformer model achieved the highest prediction accuracy by extracting local features, capturing temporal dependencies in both directions, and enhancing global patterns and key information, with a low root mean square error (RMSE) of 5.4236 μg/m3, a low mean absolute error (MAE) of 4.0220 μg/m3, a low mean absolute percentage error (MAPE) of 22.7791%, and a high correlation coefficient (R) of 0.9743. Shapley additive explanations (SHAP) analysis further revealed that PM10, CO, mean atmospheric temperature, O3, and SO2 are the key influencing factors of PM2.5. This study provides a more comprehensive and multidimensional approach to air pollution prediction and offers valuable insights for public health protection and policymakers. Full article
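The error metrics quoted in this abstract (RMSE, MAE, MAPE, and the correlation coefficient R) can be computed as below; this is a minimal sketch assuming plain Python lists of observed and predicted PM2.5 values, with any sample numbers being illustrative only.

```python
# Standard regression metrics as reported in the abstract: RMSE, MAE,
# MAPE (percent), and Pearson correlation R. Pure-Python sketch.

import math

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mape(obs, pred):
    # Undefined when any observed value is zero.
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

def pearson_r(obs, pred):
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)
```

Reporting all four together is useful because RMSE penalizes large misses, MAE reflects typical error, MAPE normalizes by concentration level, and R measures how well the predictions track the observed trend.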
