Search Results (139)

Search Parameters:
Keywords = quantized filtering

25 pages, 3133 KB  
Article
Adaptive Dual-Anchor Fusion Framework for Robust SOC Estimation and SOH Soft-Sensing of Retired Batteries with Heterogeneous Aging
by Hai Wang, Rui Liu, Yupeng Guo, Yijun Liu, Jiawei Chen, Yan Jiang and Jianying Li
Batteries 2026, 12(2), 49; https://doi.org/10.3390/batteries12020049 - 1 Feb 2026
Viewed by 38
Abstract
Reliable state estimation is critical for the safe operation of second-life battery systems but is severely hindered by significant parameter heterogeneity arising from diverse historical aging conditions. Traditional static models struggle to adapt to such variability, while online identification methods are prone to divergence under dynamic loads. To overcome these challenges, this paper proposes a Dual-Anchor Adaptive Fusion Framework for robust State of Charge (SOC) estimation and State of Health (SOH) soft-sensing. Specifically, to establish a reliable physical baseline, an automated Dynamic Relaxation Interval Selection (DRIS) strategy is introduced. By minimizing the fitting Root Mean Square Error (RMSE), DRIS systematically extracts high-fidelity parameters to construct two “anchor models” that rigorously define the boundaries of the aging space. Subsequently, a residual-driven Bayesian fusion mechanism is developed to seamlessly interpolate between these anchors based on real-time voltage feedback, enabling the model to adapt to uncalibrated target batteries. Concurrently, a novel “SOH Soft-Sensing” capability is unlocked by interpreting the adaptive fusion weights as real-time health indicators. Experimental results demonstrate that the proposed framework achieves robust SOC estimation with an RMSE of 0.42%, significantly outperforming the standard Adaptive Extended Kalman Filter (A-EKF, RMSE 1.53%), which exhibits parameter drift under dynamic loading. Moreover, the a posteriori voltage tracking residual is compressed to ~0.085 mV, effectively approaching the hardware’s ADC quantization limit. Furthermore, SOH is inferred with a relative error of 0.84% without additional capacity tests. This work establishes a robust methodological foundation for calibration-free state estimation in heterogeneous retired battery packs. Full article
(This article belongs to the Special Issue Control, Modelling, and Management of Batteries)
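The residual-driven fusion at the heart of this framework can be sketched in a few lines. This is an illustrative reading, not the paper's exact formulation: the Gaussian residual likelihood, the `sigma` value, and the scalar parameter interpolation are all assumptions.

```python
import numpy as np

def fusion_weight(residuals_a, residuals_b, sigma=0.01):
    # Score each anchor model's recent voltage residuals under a
    # zero-mean Gaussian likelihood; the normalized posterior is the
    # fusion weight for anchor A.
    la = np.exp(-0.5 * np.sum(np.square(residuals_a)) / sigma**2)
    lb = np.exp(-0.5 * np.sum(np.square(residuals_b)) / sigma**2)
    return la / (la + lb)

def fused_parameter(theta_a, theta_b, w):
    # Interpolate a model parameter between the two aging anchors;
    # w itself doubles as a soft health indicator.
    return w * theta_a + (1.0 - w) * theta_b
```

A small residual for anchor A pushes the weight toward 1, and the weight can be read directly as a position between the two aging extremes — the "SOH soft-sensing" idea.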
15 pages, 2027 KB  
Article
Weight Standardization Fractional Binary Neural Network for Image Recognition in Edge Computing
by Chih-Lung Lin, Zi-Qing Liang, Jui-Han Lin, Chun-Chieh Lee and Kuo-Chin Fan
Electronics 2026, 15(2), 481; https://doi.org/10.3390/electronics15020481 - 22 Jan 2026
Viewed by 69
Abstract
In order to achieve better accuracy, modern models have become increasingly large, leading to an exponential increase in computational load and making them challenging to apply in edge computing. Binary neural networks (BNNs) are models that quantize the filter weights and activations to 1 bit. These models are highly suitable for small chips such as advanced RISC machines (ARMs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), systems-on-chip (SoCs), and other edge computing devices. To design a model that is friendlier to edge computing devices, it is crucial to reduce the floating-point operations (FLOPs). Batch normalization (BN) is an essential tool for binary neural networks; however, when convolution layers are quantized to 1 bit, the floating-point computation cost of the BN layers becomes significant. This paper reduces floating-point operations by removing the BN layers from the model, introduces the scaled weight standardization convolution (WS-Conv) method to avoid the significant accuracy drop caused by their absence, and enhances model performance through a series of optimizations: adaptive gradient clipping (AGC) and knowledge distillation (KD). Specifically, our model maintains a competitive computational cost and accuracy even without BN layers. Furthermore, by incorporating a series of training methods, the model's accuracy on CIFAR-100 is 0.6% higher than that of the baseline model, fractional activation BNN (FracBNN), while the total computational load is only 46% of the baseline's. With unchanged binary operations (BOPs), the FLOPs are reduced to nearly zero, making the model more suitable for embedded platforms such as FPGAs and other edge devices. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
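The two core operations the abstract names — scaled weight standardization in place of batch norm, and 1-bit quantization — can be sketched as follows. This is a simplified reading: the learnable gain and the straight-through gradient handling of a real WS-Conv/BNN training setup are omitted.

```python
import numpy as np

def standardize_weights(w, eps=1e-5):
    # Per-output-channel standardization scaled by fan-in, the core of
    # scaled weight standardization for a conv tensor (out, in, kh, kw).
    mean = w.mean(axis=(1, 2, 3), keepdims=True)
    var = w.var(axis=(1, 2, 3), keepdims=True)
    fan_in = np.prod(w.shape[1:])
    return (w - mean) / np.sqrt(var * fan_in + eps)

def binarize(x):
    # 1-bit quantization of weights/activations to {-1, +1}.
    return np.where(x >= 0, 1.0, -1.0)
```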
22 pages, 1918 KB  
Article
Edge-VisionGuard: A Lightweight Signal-Processing and AI Framework for Driver State and Low-Visibility Hazard Detection
by Manuel J. C. S. Reis, Carlos Serôdio and Frederico Branco
Appl. Sci. 2026, 16(2), 1037; https://doi.org/10.3390/app16021037 - 20 Jan 2026
Viewed by 159
Abstract
Driving safety under low-visibility or distracted conditions remains a critical challenge for intelligent transportation systems. This paper presents Edge-VisionGuard, a lightweight framework that integrates signal processing and edge artificial intelligence to enhance real-time driver monitoring and hazard detection. The system fuses multi-modal sensor data—including visual, inertial, and illumination cues—to jointly estimate driver attention and environmental visibility. A hybrid temporal–spatial feature extractor (TS-FE) is introduced, combining convolutional and B-spline reconstruction filters to improve robustness against illumination changes and sensor noise. To enable deployment on resource-constrained automotive hardware, a structured pruning and quantization pipeline is proposed. Experiments on synthetic VR-based driving scenes demonstrate that the full-precision model achieves 89.6% driver-state accuracy (F1 = 0.893) and 100% visibility accuracy, with an average inference latency of 16.5 ms. After 60% parameter reduction and short fine-tuning, the pruned model preserves 87.1% accuracy (F1 = 0.866) and <3 ms latency overhead. These results confirm that Edge-VisionGuard maintains near-baseline performance under strict computational constraints, advancing the integration of computer vision and Edge AI for next-generation safe and reliable driving assistance systems. Full article
(This article belongs to the Special Issue Advances in Virtual Reality and Vision for Driving Safety)
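The structured-pruning stage of the pipeline can be illustrated with a simple magnitude criterion. The L1-norm channel ranking below is an assumption for illustration — the abstract does not state the paper's actual pruning criterion.

```python
import numpy as np

def channel_prune_mask(weights, keep_ratio=0.4):
    # Rank output channels of a conv tensor (out, in, kh, kw) by L1 norm
    # and keep the top fraction; keep_ratio=0.4 mirrors the paper's
    # 60% parameter reduction.
    scores = np.abs(weights).sum(axis=(1, 2, 3))
    k = max(1, int(round(keep_ratio * scores.size)))
    mask = np.zeros(scores.size, dtype=bool)
    mask[np.argsort(scores)[-k:]] = True
    return mask
```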
43 pages, 6158 KB  
Article
A Multi-Fish Tracking and Behavior Modeling Framework for High-Density Cage Aquaculture
by Xinyao Xiao, Tao Liu, Shuangyan He, Peiliang Li, Yanzhen Gu, Pixue Li and Jiang Dong
Sensors 2026, 26(1), 256; https://doi.org/10.3390/s26010256 - 31 Dec 2025
Viewed by 411
Abstract
Multi-fish tracking and behavior analysis in deep-sea cages face two critical challenges: first, the homogeneity of fish appearance and low image quality render appearance-based association unreliable; second, standard linear motion models fail to capture the complex, nonlinear swimming patterns (e.g., turning) of fish, leading to frequent identity switches and fragmented trajectories. To address these challenges, we propose SOD-SORT, which integrates a Constant Turn-Rate and Velocity (CTRV) motion model within an Extended Kalman Filter (EKF) framework into DeepOCSORT, a recent observation-centric tracker. Through systematic Bayesian optimization of the EKF process noise (Q), observation noise (R), and ReID weighting parameters, we achieve harmonious integration of advanced motion modeling with appearance features. Evaluations on the DeepBlueI validation set show that SOD-SORT attains IDF1 = 0.829 and reduces identity switches by 13% (93 vs. 107) compared to the DeepOCSORT baseline, while maintaining comparable MOTA (0.737). Controlled ablation studies reveal that naive integration of CTRV-EKF with default parameters degrades performance substantially (IDs: 172 vs. 107 baseline), but careful parameter optimization resolves this motion-appearance conflict. Furthermore, we introduce a statistical quantization method that converts variable-length trajectories into fixed-length feature vectors, enabling effective unsupervised classification of normal and abnormal swimming behaviors in both the Fish4Knowledge coral reef dataset and real-world Deep Blue I cage videos. The proposed approach demonstrates that principled integration of advanced motion models with appearance cues, combined with high-quality continuous trajectories, can support reliable behavior modeling for aquaculture monitoring applications. Full article
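The CTRV motion model used inside the EKF has a standard closed-form state transition, sketched here (process noise and the EKF covariance update are omitted):

```python
import numpy as np

def ctrv_predict(state, dt):
    # Constant Turn-Rate and Velocity transition.
    # state = [x, y, v, yaw, yaw_rate]; the nonlinear arc form handles
    # turning, with a straight-line fallback when the turn rate is ~0.
    x, y, v, yaw, w = state
    if abs(w) < 1e-6:
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
    else:
        x += v / w * (np.sin(yaw + w * dt) - np.sin(yaw))
        y += v / w * (-np.cos(yaw + w * dt) + np.cos(yaw))
    return np.array([x, y, v, yaw + w * dt, w])
```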
16 pages, 1543 KB  
Article
High Precision Speech Keyword Spotting Based on Binary Deep Neural Network in FPGA
by Ang Zhang, Jialiang Shi, Hui Qian and Junjie Wang
Entropy 2025, 27(11), 1143; https://doi.org/10.3390/e27111143 - 7 Nov 2025
Viewed by 815
Abstract
Deep Neural Networks (DNNs) are the primary approach for enhancing the real-time performance and accuracy of Keyword Spotting (KWS) systems in speech processing. However, the exceptional performance of DNN-KWS faces significant challenges related to computational intensity and storage requirements, severely limiting its deployment on resource-constrained Internet of Things (IoT) edge devices. Researchers have sought to mitigate these demands by employing Binary Neural Networks (BNNs) through single-bit quantization, albeit at the cost of reduced recognition accuracy. From an information-theoretic perspective, binarization, as a form of lossy compression, increases the uncertainty (Shannon entropy) in the model’s output, contributing to the accuracy degradation. Unfortunately, even a slight accuracy degradation can trigger frequent false wake-ups in the KWS module, leading to substantial energy consumption in IoT devices. To address this issue, this paper proposes a novel Probability Smoothing Enhanced Binarized Neural Network (PSE-BNN) model that achieves a balance between computational complexity and accuracy, enabling efficient deployment on an FPGA platform. The PSE-BNN comprises two components: a preliminary recognition extraction module for extracting initial KWS features, and a result recognition module that leverages temporal correlation to denoise and enhance the quantized model’s features, thereby improving overall recognition accuracy by reducing the conditional entropy of the output distribution. Experimental results demonstrate that the PSE-BNN achieves a recognition accuracy of 97.29% on the Google Speech Commands Dataset (GSCD). Furthermore, deployed on the Xilinx VC707 hardware platform, the PSE-BNN utilizes only 1939 Look-Up Tables (LUTs), 832 Flip-Flops (FFs), and 234 Kb of storage. Compared to state-of-the-art BNN-KWS designs, the proposed method improves accuracy by 1.93% while reducing hardware resource usage by nearly 65%. The smoothing filter effectively suppresses noise-induced entropy, enhancing the signal-to-noise ratio (SNR) in the information transmission path. This demonstrates the significant potential of the PSE-BNN-FPGA design for resource-constrained edge IoT devices. Full article
(This article belongs to the Section Signal and Data Analysis)
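One plausible form of the probability-smoothing idea — averaging per-frame class posteriors over a sliding window before deciding — is sketched below. The window size and the plain moving average are illustrative assumptions, not the PSE-BNN's exact filter.

```python
import numpy as np

def smooth_posteriors(frame_probs, win=5):
    # Average per-class posteriors over a causal sliding window,
    # exploiting the temporal correlation of adjacent speech frames to
    # suppress spurious single-frame detections (false wake-ups).
    p = np.asarray(frame_probs, dtype=float)
    out = np.empty_like(p)
    for t in range(len(p)):
        lo = max(0, t - win + 1)
        out[t] = p[lo:t + 1].mean(axis=0)
    return out
```

A one-frame spike toward the wrong class is outvoted by its neighbours after smoothing.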
19 pages, 20616 KB  
Article
Toward Trustworthy On-Device AI: A Quantization-Robust Parameterized Hybrid Neural Filtering Framework
by Sangwoo Hong, Seung-Wook Kim, Seunghyun Moon and Seowon Ji
Mathematics 2025, 13(21), 3447; https://doi.org/10.3390/math13213447 - 29 Oct 2025
Viewed by 767
Abstract
Recent advances in deep learning have led to a proliferation of AI services for the general public. Consequently, constructing trustworthy AI systems that operate on personal devices has become a crucial challenge. While on-device processing is critical for privacy-preserving and latency-sensitive applications, conventional deep learning approaches often suffer from instability under quantization and high computational costs. Toward a trustworthy and efficient on-device solution for image processing, we present a hybrid neural filtering framework that combines the representational power of lightweight neural networks with the stability of classical filters. In our framework, the neural network predicts a low-dimensional parameter map that guides the filter’s behavior, effectively decoupling parameter estimation from the final image synthesis. This design enables a truly trustworthy AI system by operating entirely on-device, which eliminates the reliance on servers and significantly reduces computational cost. To ensure quantization robustness, we introduce a basis-decomposed parameterization, a design mathematically proven to bound reconstruction errors. Our network predicts a set of basis maps that are combined via fixed coefficients to form the final guidance. This architecture is intrinsically robust to quantization and supports runtime-adaptive precision without retraining. Experiments on depth map super-resolution validate our approach. Our framework demonstrates exceptional quantization robustness, exhibiting no performance degradation under 8-bit quantization, whereas a baseline suffers a significant 1.56 dB drop. Furthermore, our model’s significantly lower Mean Squared Error highlights its superior stability, providing a practical and mathematically grounded pathway toward trustworthy on-device AI. Full article
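The basis-decomposed parameterization can be sketched as a fixed linear combination of predicted basis maps. The shapes and coefficients here are illustrative; the point the abstract makes is that fixed coefficients let quantization error in each basis map enter the output linearly, hence boundedly.

```python
import numpy as np

def combine_basis(basis_maps, coeffs):
    # Combine k predicted basis maps (k, H, W) with fixed coefficients
    # (k,) into the final guidance map (H, W). With fixed coeffs, the
    # output error is bounded by sum(|c_i| * per-map quantization error).
    b = np.asarray(basis_maps, dtype=float)
    c = np.asarray(coeffs, dtype=float)
    return np.tensordot(c, b, axes=1)
```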
24 pages, 7023 KB  
Article
High-Precision Low-Speed Measurement for Permanent Magnet Synchronous Motors Using an Improved Extended State Observer
by Runze Ji, Kai Liu, Yingsong Wang and Rana Md Sohel
World Electr. Veh. J. 2025, 16(11), 595; https://doi.org/10.3390/wevj16110595 - 28 Oct 2025
Cited by 1 | Viewed by 764
Abstract
High-precision speed measurement at low speeds in PMSM drives is hindered by encoder quantization noise. This paper proposes an enhanced extended state observer (ESO)-based method to overcome the limitations of conventional approaches such as direct differentiation with a low-pass filter (high noise), the phase-locked loop (PLL)-based method (limited dynamic response), and the standard ESO (sensitivity to disturbances). The improved ESO incorporates reference torque feedforward and disturbance feedback, significantly suppressing noise and enhancing robustness. Simulations and experiments demonstrate that the proposed method reduces steady-state speed fluctuation by up to 42% compared to the standard ESO and by over 90.1% relative to differentiation-based methods, while also improving transient performance. It exhibits superior accuracy and stability across various low-speed conditions, offering a practical solution for high-performance servo applications. Full article
(This article belongs to the Section Propulsion Systems and Components)
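A minimal discrete-time version of the improved-observer idea — a third-order extended state observer with torque feedforward and an estimated-disturbance feedback term — might look like this. The gains, inertia, and Euler discretization are illustrative assumptions, not the paper's design.

```python
import numpy as np

def eso_step(x_hat, theta_meas, u, dt, J=1.0, L=(300.0, 3.0e4, 1.0e6)):
    # x_hat = [theta, omega, d]: position, speed, lumped disturbance.
    # u is the reference torque feedforward; the disturbance estimate d
    # is fed back into the speed dynamics. L are illustrative
    # bandwidth-style observer gains (3*w0, 3*w0^2, w0^3 with w0=100).
    theta, omega, d = x_hat
    e = theta_meas - theta
    theta_n = theta + dt * (omega + L[0] * e)
    omega_n = omega + dt * ((u + d) / J + L[1] * e)
    d_n = d + dt * L[2] * e
    return np.array([theta_n, omega_n, d_n])
```

Run on a quantization-free ramp, the speed estimate converges to the true speed with zero steady-state error, since the disturbance state absorbs any bias.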
28 pages, 1690 KB  
Article
Hardware-Aware Neural Architecture Search for Real-Time Video Processing in FPGA-Accelerated Endoscopic Imaging
by Cunguang Zhang, Rui Cui, Gang Wang, Tong Gao, Jielu Yan, Weizhi Xian, Xuekai Wei and Yi Qin
Appl. Sci. 2025, 15(20), 11200; https://doi.org/10.3390/app152011200 - 19 Oct 2025
Viewed by 1179
Abstract
Medical endoscopic video processing requires real-time execution of color component acquisition, color filter array (CFA) demosaicing, and high dynamic range (HDR) compression under low-light conditions, while adhering to strict thermal constraints within the surgical handpiece. Traditional hardware-aware neural architecture search (NAS) relies on fixed hardware design spaces, making it difficult to balance accuracy, power consumption, and real-time performance. A collaborative “power–accuracy” optimization method is proposed for hardware-aware NAS. First, we propose a novel hardware modeling framework that abstracts FPGA heterogeneous resources into unified cell units and establishes a power–temperature closed-loop model to ensure that the handpiece surface temperature does not exceed clinical thresholds. Within this framework, we constrain the interstage latency balance in pipelines to avoid the routing congestion and frequency degradation caused by deep pipelines. Then, we optimize the NAS strategy using pipeline blocks combined with a hardware efficiency reward function. Finally, color component acquisition, CFA demosaicing, dynamic range compression, dynamic precision quantization, and a streaming architecture are integrated into our framework. Experiments demonstrate that the proposed method achieves 2.8 W power consumption at 47 °C on a Xilinx ZCU102 platform, with a 54% improvement in throughput (vs. hardware-aware NAS), providing an engineer-ready lightweight network for medical edge devices such as endoscopes. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
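In its simplest steady-state form, the power–temperature constraint reduces to inverting a thermal-resistance model. The linear model and the numbers below are illustrative, not the paper's:

```python
def max_power_budget(t_ambient, t_limit, r_thermal):
    # Steady-state thermal sketch: surface temperature
    # T = T_ambient + R_th * P. Inverting it gives the power budget that
    # keeps the handpiece below the clinical threshold.
    return max(0.0, (t_limit - t_ambient) / r_thermal)
```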
23 pages, 2255 KB  
Article
Design and Implementation of a YOLOv2 Accelerator on a Zynq-7000 FPGA
by Huimin Kim and Tae-Kyoung Kim
Sensors 2025, 25(20), 6359; https://doi.org/10.3390/s25206359 - 14 Oct 2025
Cited by 1 | Viewed by 1691
Abstract
You Only Look Once (YOLO) is a convolutional neural network-based object detection algorithm widely used in real-time vision applications. However, its high computational demand leads to significant power consumption and cost when deployed on graphics processing units. Field-programmable gate arrays offer a low-power alternative; however, their efficient implementation requires architecture-level optimization tailored to limited device resources. This study presents an optimized YOLOv2 accelerator for the Zynq-7000 system-on-chip (SoC). The design employs 16-bit integer quantization, a filter reuse structure, an input feature map reuse scheme using a line buffer, and tiling parameter optimization for the convolution and max pooling layers to maximize resource efficiency. In addition, a stall-based control mechanism is introduced to prevent structural hazards in the pipeline. The proposed accelerator was implemented on the Zynq-7000 SoC board, and a system-level evaluation confirmed a negligible accuracy drop of only 0.2% compared with the 32-bit floating-point baseline. Compared with previous YOLO accelerators on the same SoC, the design achieved up to 26% and 15% reductions in flip-flop and digital signal processor usage, respectively. This result demonstrates feasible deployment on the XC7Z020, with 57.27% DSP and 16.55% FF utilization. Full article
(This article belongs to the Special Issue Object Detection and Recognition Based on Deep Learning)
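The 16-bit integer quantization step can be sketched with a symmetric per-tensor scale. The abstract does not state the paper's exact scaling scheme, so the scheme below is an assumption:

```python
import numpy as np

def quantize_int16(x):
    # Symmetric 16-bit quantization: scale by the max magnitude so the
    # largest value maps to +/-32767, round to int16, return the scale.
    # (Assumes x is not all zeros.)
    scale = np.abs(x).max() / 32767.0
    q = np.round(np.asarray(x) / scale).astype(np.int16)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale
```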
19 pages, 4859 KB  
Article
A Dual-Mode Adaptive Bandwidth PLL for Improved Lock Performance
by Thi Viet Ha Nguyen and Cong-Kha Pham
Electronics 2025, 14(20), 4008; https://doi.org/10.3390/electronics14204008 - 13 Oct 2025
Cited by 1 | Viewed by 3246
Abstract
This paper proposes an adaptive bandwidth Phase-Locked Loop (PLL) that integrates integer-N and fractional-N switching for energy-efficient RF synthesis in IoT and mobile applications. The architecture exploits a wide-bandwidth integer-N mode for rapid lock acquisition, then seamlessly transitions to a narrow-bandwidth fractional-N mode for high-resolution synthesis and noise optimization. The architecture features a bandwidth-reconfigurable loop filter with intelligent switching control that monitors phase error dynamics. A novel adaptive digital noise filter mitigates ΔΣ quantization noise, replacing conventional synchronous delay lines. The multi-loop structure incorporates a high-resolution digital phase detector to enhance frequency accuracy and minimize jitter across both operating modes. Implemented in 180 nm CMOS technology, the PLL consumes 13.2 mW while achieving −119 dBc/Hz in-band phase noise and 1 ps rms integrated jitter. Operating over 2.9–3.2 GHz from a 1.8 V supply, the circuit achieves a worst-case fractional spur of −62.7 dBc, corresponding to a figure of merit (FOM) of −228.8 dB. Lock time improvements of 70% are demonstrated compared to single-mode implementations, making it suitable for high-precision, low-power wireless communication systems requiring agile frequency synthesis. Full article
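The dual-mode handover — wide-bandwidth integer-N until the phase error settles, then narrow-bandwidth fractional-N — can be sketched as a simple supervisor in software; the threshold and hold count are illustrative, not the chip's actual switching logic:

```python
def lock_mode(phase_errors, threshold=0.01, hold=8):
    # Stay in wide-band integer-N mode until |phase error| has remained
    # under `threshold` for `hold` consecutive samples, then hand over
    # to narrow-band fractional-N mode. Returns per-sample modes.
    modes, run, mode = [], 0, "int"
    for e in phase_errors:
        run = run + 1 if abs(e) < threshold else 0
        if mode == "int" and run >= hold:
            mode = "frac"
        modes.append(mode)
    return modes
```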
20 pages, 2197 KB  
Article
Perceptual Image Hashing Fusing Zernike Moments and Saliency-Based Local Binary Patterns
by Wei Li, Tingting Wang, Yajun Liu and Kai Liu
Computers 2025, 14(9), 401; https://doi.org/10.3390/computers14090401 - 21 Sep 2025
Viewed by 1044
Abstract
This paper proposes a novel perceptual image hashing scheme that robustly combines global structural features with local texture information for image authentication. The method starts with image normalization and Gaussian filtering to ensure scale invariance and suppress noise. A saliency map is then generated from a color vector angle matrix using a frequency-tuned model to identify perceptually significant regions. Local Binary Pattern (LBP) features are extracted from this map to represent fine-grained textures, while rotation-invariant Zernike moments are computed to capture global geometric structures. These local and global features are quantized and concatenated into a compact binary hash. Extensive experiments on standard databases show that the proposed method outperforms state-of-the-art algorithms in both robustness against content-preserving manipulations and discriminability across different images. Quantitative evaluations based on ROC curves and AUC values confirm its superior robustness–uniqueness trade-off, demonstrating the effectiveness of the saliency-guided fusion of Zernike moments and LBP for reliable image hashing. Full article
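The LBP descriptor used for the local texture features is standard and easy to sketch: each interior pixel is encoded by thresholding its 8 neighbours against the centre value. The neighbour ordering below is one common convention, not necessarily the paper's.

```python
import numpy as np

def lbp_3x3(img):
    # One bit per neighbour: set if neighbour >= centre, giving an
    # 8-bit code (0-255) per interior pixel.
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros_like(centre, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out += (neigh >= centre).astype(int) << bit
    return out
```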
20 pages, 42612 KB  
Article
Progressive Color Correction and Vision-Inspired Adaptive Framework for Underwater Image Enhancement
by Zhenhua Li, Wenjing Liu, Ji Wang and Yuqiang Yang
J. Mar. Sci. Eng. 2025, 13(9), 1820; https://doi.org/10.3390/jmse13091820 - 19 Sep 2025
Cited by 1 | Viewed by 990
Abstract
Underwater images frequently exhibit color distortion, detail blurring, and contrast degradation due to absorption and scattering by the underwater medium. This study proposes a progressive color correction strategy integrated with a vision-inspired image enhancement framework to address these issues. Specifically, the progressive color correction process includes adaptive color quantization-based global color correction, followed by guided filter-based local color refinement, aiming to restore accurate colors while enhancing visual perception. Within the vision-inspired enhancement framework, the color-adjusted image is first decomposed into a base layer and a detail layer, corresponding to low- and high-frequency visual information, respectively. Subsequently, detail enhancement and noise suppression are applied in the detail pathway, while global brightness correction is performed in the structural pathway. Finally, results from both pathways are fused to yield the enhanced underwater image. Extensive experiments on four datasets verify that the proposed method effectively handles the aforementioned underwater enhancement challenges and significantly outperforms state-of-the-art techniques. Full article
(This article belongs to the Section Ocean Engineering)
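The base/detail decomposition in the vision-inspired framework can be illustrated with a crude box-filter split. The paper uses guided filtering and adaptive corrections, so treat the box filter and fixed detail gain here as stand-ins:

```python
import numpy as np

def enhance(img, gain=1.5, k=5):
    # Split the image into a low-frequency base layer (structural
    # pathway) and a high-frequency detail layer (detail pathway),
    # amplify the detail, and fuse the two back together.
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    base = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            base += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    base /= k * k
    detail = img - base
    return base + gain * detail
```

A flat region has no detail to amplify, so constant inputs pass through unchanged.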
27 pages, 4269 KB  
Article
Image Processing Algorithms Analysis for Roadside Wild Animal Detection
by Mindaugas Knyva, Darius Gailius, Šarūnas Kilius, Aistė Kukanauskaitė, Pranas Kuzas, Gintautas Balčiūnas, Asta Meškuotienė and Justina Dobilienė
Sensors 2025, 25(18), 5876; https://doi.org/10.3390/s25185876 - 19 Sep 2025
Cited by 1 | Viewed by 1185
Abstract
The study presents a comparative analysis of five distinct image processing methodologies for roadside wild animal detection using thermal imagery, aiming to identify an optimal approach for embedded system implementation to mitigate wildlife–vehicle collisions. The evaluated techniques included the following: bilateral filtering followed by thresholding and SIFT feature matching; Gaussian filtering combined with Canny edge detection and contour analysis; color quantization via the nearest average algorithm followed by contour identification; motion detection based on absolute inter-frame differencing, object dilation, thresholding, and contour comparison; and animal detection based on a YOLOv8n neural network. These algorithms were applied to sequential thermal images captured by a custom roadside surveillance system incorporating a thermal camera and a Raspberry Pi processing unit. Performance evaluation utilized a dataset of consecutive frames, assessing average execution time, sensitivity, specificity, and accuracy. The results revealed performance trade-offs: the motion detection method achieved the highest sensitivity (92.31%) and overall accuracy (87.50%), critical for minimizing missed detections, despite exhibiting nearly the lowest specificity (66.67%) and a moderate execution time (0.126 s) compared to the fastest bilateral filter approach (0.093 s) and the high-specificity Canny edge method (90.00%). Consequently, considering the paramount importance of detection reliability (sensitivity and accuracy) in this application, the motion-based methodology was selected for further development and implementation within the target embedded system framework. Subsequent testing on diverse datasets validated its general robustness while highlighting potential performance variations depending on dataset characteristics, particularly the duration of animal presence within the monitored frame. Full article
(This article belongs to the Special Issue Energy Harvesting and Machine Learning in IoT Sensors)
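The selected motion-detection pipeline (absolute inter-frame differencing, thresholding, dilation) is straightforward to sketch; the threshold value and the 3×3 structuring element are illustrative, and the final contour comparison is omitted:

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    # Absolute inter-frame difference, binary threshold, then a 3x3
    # dilation to consolidate fragmented foreground pixels.
    diff = np.abs(curr.astype(int) - prev.astype(int))
    mask = (diff > thresh).astype(np.uint8)
    padded = np.pad(mask, 1)
    dilated = np.zeros_like(mask)
    h, w = mask.shape
    for dy in range(3):
        for dx in range(3):
            dilated |= padded[dy:dy + h, dx:dx + w]
    return dilated
```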
20 pages, 4833 KB  
Article
High-Precision Visual SLAM for Dynamic Scenes Using Semantic–Geometric Feature Filtering and NeRF Maps
by Yanjun Ma, Jiahao Lv and Jie Wei
Electronics 2025, 14(18), 3657; https://doi.org/10.3390/electronics14183657 - 15 Sep 2025
Cited by 3 | Viewed by 1851
Abstract
Dynamic environments pose significant challenges for visual SLAM, including feature ambiguity, weak textures, and map inconsistencies caused by moving objects. We present a robust SLAM framework integrating image enhancement, a mixed-precision quantized feature detection network, semantic-driven dynamic feature filtering, and NeRF-based static scene reconstruction. The system reliably extracts features under challenging conditions, removes dynamic points using instance segmentation combined with epipolar geometric constraints, and reconstructs static scenes with enhanced structural fidelity. Extensive experiments on TUM RGB-D, BONN RGB-D, and a custom dataset demonstrate notable improvements in the RMSE, mean, median, and standard deviation. Compared with ORB-SLAM3, our method achieves an average RMSE reduction of 93.4%, and relative to other state-of-the-art dynamic SLAM systems it improves the average RMSE by 49.6% on TUM and 23.1% on BONN, highlighting its high accuracy, robustness, and adaptability in complex and highly dynamic environments. Full article
(This article belongs to the Special Issue 3D Computer Vision and 3D Reconstruction)
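A common geometric test for flagging dynamic features of the kind this abstract describes is the distance of a matched point from its epipolar line under the fundamental matrix; a static point should lie near the line, while a large distance suggests independent motion. Whether the paper uses exactly this test is an assumption.

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    # Distance of point p2 (second image) from the epipolar line F @ p1
    # induced by its match p1 (first image), in pixels.
    p1 = np.append(np.asarray(p1, dtype=float), 1.0)
    p2 = np.append(np.asarray(p2, dtype=float), 1.0)
    line = F @ p1
    return abs(p2 @ line) / np.hypot(line[0], line[1])
```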
17 pages, 2363 KB  
Article
Low-Power CT-DS ADC for High-Sensitivity Automotive-Grade Sub-1 GHz Receiver
by Ying Li, Wenyuan Li and Qingsheng Hu
Electronics 2025, 14(18), 3606; https://doi.org/10.3390/electronics14183606 - 11 Sep 2025
Viewed by 932
Abstract
This paper presents a low-power continuous-time delta-sigma (CT-DS) analog-to-digital converter (ADC) for use in high-sensitivity automotive-grade sub-1 GHz receivers in emerging wireless sensor network applications. The proposed ADC employs a third-order Cascade of Integrators FeedForward and Feedback (CIFF-B) loop filter operating at a sampling frequency of 150 MHz to achieve high energy efficiency and robust noise shaping. A low-noise phase-locked loop (PLL) is integrated to provide high-precision clock signals. The loop filter combines active-RC and Gm-C integrators with the source degeneration technique to optimize power consumption and linearity. To minimize complexity and enhance stability, a 1-bit quantizer with isolation switches and return-to-zero (RZ) digital-to-analog converters (DACs) are used in the modulator. With a 500 kHz bandwidth, the sensitivity of the receiver is −105.5 dBm. Fabricated in a 180 nm standard CMOS process, the prototype achieves a peak signal-to-noise ratio (SNR) of 76.1 dB and a signal-to-noise and distortion ratio (SNDR) of 75.3 dB, resulting in a Schreier figure of merit (FoM) of 160.7 dB based on SNDR, while consuming only 0.8 mA from a 1.8 V supply. Full article
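The noise-shaping principle behind such a modulator can be demonstrated with a first-order toy loop. The paper's modulator is third-order CIFF-B with RZ DACs; this sketch only shows the integrator-plus-1-bit-quantizer mechanism, whose output bitstream mean tracks the input average while quantization error is pushed to high frequencies.

```python
def delta_sigma_1st(samples):
    # First-order delta-sigma loop: integrate the error between the
    # input and the fed-back previous 1-bit output, then quantize.
    # Input in [-1, 1]; output is a +/-1 bitstream.
    integ, bits = 0.0, []
    for x in samples:
        integ += x - (bits[-1] if bits else 0.0)
        bits.append(1.0 if integ >= 0 else -1.0)
    return bits
```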