Search Results (102)

Search Parameters:
Keywords = foggy weather

18 pages, 7628 KB  
Article
Bio-Inspired Ghost Imaging: A Self-Attention Approach for Scattering-Robust Remote Sensing
by Rehmat Iqbal, Yanfeng Song, Kiran Zahoor, Loulou Deng, Dapeng Tian, Yutang Wang, Peng Wang and Jie Cao
Biomimetics 2026, 11(1), 53; https://doi.org/10.3390/biomimetics11010053 - 8 Jan 2026
Viewed by 326
Abstract
Ghost imaging (GI) offers a robust framework for remote sensing under degraded visibility conditions. However, atmospheric scattering in phenomena such as fog introduces significant noise and signal attenuation, thereby limiting its efficacy. Inspired by the selective attention mechanisms of biological visual systems, this study introduces a novel deep learning (DL) architecture that embeds a self-attention mechanism to enhance GI reconstruction in foggy environments. The proposed approach mimics neural processes by modeling both local and global dependencies within one-dimensional bucket measurements, enabling superior recovery of image details and structural coherence even at reduced sampling rates. Extensive simulations on the Modified National Institute of Standards and Technology (MNIST) dataset and a custom Human-Horse dataset demonstrate that our bio-inspired model outperforms conventional GI and convolutional neural network-based methods. Specifically, it achieves Peak Signal-to-Noise Ratio (PSNR) values of 24.5–25.5 dB and Structural Similarity Index Measure (SSIM) values of approximately 0.8 under high scattering conditions (β 3.0 dB/m) and moderate sampling ratios (N 50%). A comparative analysis confirms the critical role of the self-attention module in delivering higher-quality image reconstruction than baseline techniques. The model also maintains computational efficiency, with inference times under 0.12 s, supporting real-time applications. This work establishes a new benchmark for bio-inspired computational imaging, with significant potential for environmental monitoring, autonomous navigation, and defense systems operating in adverse weather. Full article
(This article belongs to the Special Issue Bionic Vision Applications and Validation)
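As a rough illustration of the core operation this abstract describes, here is a minimal NumPy sketch of scaled dot-product self-attention applied to a sequence of embedded 1D bucket measurements; the dimensions, random projections, and variable names are hypothetical, and this is not the paper's architecture.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of tokens.

    x: (seq_len, d_model), e.g. embedded 1D bucket measurements.
    w_q, w_k, w_v: (d_model, d_k) projection matrices.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # attended features

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 64, 32, 32                   # hypothetical sizes
buckets = rng.normal(size=(seq_len, d_model))        # stand-in embeddings
w_q, w_k, w_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(buckets, w_q, w_k, w_v)
print(out.shape)  # (64, 32)
```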

25 pages, 18950 KB  
Article
Robust Object Detection for UAVs in Foggy Environments with Spatial-Edge Fusion and Dynamic Task Alignment
by Qing Dong, Tianxin Han, Gang Wu, Lina Sun and Yuchang Lu
Remote Sens. 2026, 18(1), 169; https://doi.org/10.3390/rs18010169 - 5 Jan 2026
Viewed by 287
Abstract
Robust scene perception in adverse environmental conditions, particularly under dense fog, presents a persistent and fundamental challenge to the reliability of object detection systems. To address this critical challenge, we propose Fog-UAVNet, a novel lightweight deep-learning architecture designed to enhance unmanned aerial vehicle (UAV) object detection performance in foggy environments. Fog-UAVNet incorporates three key innovations: the Spatial-Edge Feature Fusion Module (SEFFM), which enhances feature extraction by effectively integrating edge and spatial information; the Frequency-Adaptive Dilated Convolution (FADC), which dynamically adjusts to fog density variations and further enhances feature representation under adverse conditions; and the Dynamic Task-Aligned Head (DTAH), which dynamically aligns localization and classification tasks and thus improves overall model performance. To evaluate the effectiveness of our approach, we independently constructed a real-world foggy dataset and synthesized the VisDrone-fog dataset using an atmospheric scattering model. Extensive experiments on multiple challenging datasets demonstrate that Fog-UAVNet consistently outperforms state-of-the-art methods in both detection accuracy and computational efficiency, highlighting its potential for enhancing robust visual perception under adverse weather. Full article
(This article belongs to the Special Issue Efficient Object Detection Based on Remote Sensing Images)
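Fog synthesis of the kind used for VisDrone-fog relies on the standard atmospheric scattering model I = J·t + A·(1 − t) with transmission t = exp(−β·d). A minimal NumPy sketch follows; the parameter values are illustrative, not the paper's.

```python
import numpy as np

def add_synthetic_fog(clear_img, depth, beta=0.08, airlight=0.9):
    """Render fog with the standard atmospheric scattering model.

    clear_img: float array in [0, 1], shape (H, W, 3)
    depth:     scene depth in metres, shape (H, W)
    beta:      extinction coefficient (larger = denser fog)
    airlight:  global atmospheric light A
    """
    t = np.exp(-beta * depth)[..., None]          # transmission map
    return clear_img * t + airlight * (1.0 - t)   # I = J*t + A*(1-t)

# toy example: a flat "scene" whose depth grows left to right
img = np.ones((4, 6, 3)) * 0.5
depth = np.tile(np.linspace(1.0, 50.0, 6), (4, 1))
foggy = add_synthetic_fog(img, depth)
print(foggy.min(), foggy.max())
```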

25 pages, 7265 KB  
Article
Hazy Aware-YOLO: An Enhanced UAV Object Detection Model for Foggy Weather via Wavelet Convolution and Attention-Based Optimization
by Lin Wang, Binjie Zhang, Qinyan Tan, Dejun Duan and Yulei Wang
Automation 2026, 7(1), 3; https://doi.org/10.3390/automation7010003 - 24 Dec 2025
Viewed by 426
Abstract
Foggy weather critically undermines the autonomous perception capabilities of unmanned aerial vehicles (UAVs) by degrading image contrast, obscuring object structures, and impairing small target recognition, which often leads to significant performance deterioration in conventional detection models. To address these challenges in automated UAV operations, this study introduces Hazy Aware-YOLO (HA-YOLO), an enhanced detection framework based on YOLO11, specifically engineered for reliable object detection under low-visibility conditions. The proposed model incorporates wavelet convolution to suppress haze-induced noise and enhance multi-scale feature fusion. Furthermore, a novel Context-Enhanced Hybrid Self-Attention (CEHSA) module is developed, which sequentially combines channel attention aggregation (CAA) with multi-head self-attention (MHSA) to capture local contextual cues while mitigating global noise interference. Extensive evaluations demonstrate that HA-YOLO and its variants achieve superior detection precision and robustness compared to the baseline YOLO11, while maintaining model efficacy. In particular, when benchmarked against state-of-the-art detectors, HA-YOLO exhibits a better balance between detection accuracy and complexity, offering a practical and efficient solution for real-world autonomous UAV perception tasks in adverse weather. Full article
(This article belongs to the Section Smart Transportation and Autonomous Vehicles)
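Wavelet convolution operates on wavelet sub-bands of the feature map; as background, here is a minimal single-level 2D Haar decomposition in NumPy. This shows only the decomposition itself, not HA-YOLO's wavelet convolution module.

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2D Haar wavelet transform.

    x: (H, W) array with even H and W.
    Returns the low-frequency approximation and three detail sub-bands,
    each of shape (H/2, W/2).
    """
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0      # approximation (low-pass in both directions)
    lh = (a - b + c - d) / 2.0      # differences across columns
    hl = (a + b - c - d) / 2.0      # differences across rows
    hh = (a - b - c + d) / 2.0      # diagonal differences
    return ll, lh, hl, hh

img = np.random.default_rng(0).random((8, 8))
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (4, 4)
```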

17 pages, 5885 KB  
Article
Real-Time Detection of Dynamic Targets in Dynamic Scattering Media
by Ying Jin, Wenbo Zhao, Siyu Guo, Jiakuan Zhang, Lixun Ye, Chen Nie, Yiyang Zhu, Hongfei Yu, Cangtao Zhou and Wanjun Dai
Photonics 2025, 12(12), 1242; https://doi.org/10.3390/photonics12121242 - 18 Dec 2025
Viewed by 331
Abstract
In dynamic scattering media (such as rain, fog, and biological tissues), scattered light severely degrades target images, directly causing a sharp drop in the detection confidence of object detection models and a significant increase in missed detections. This is a key challenge at the intersection of optical imaging and computer vision. To address the poor generalization and slow inference speed of existing schemes, we construct an end-to-end framework of multi-stage preprocessing, customized network reconstruction, and object detection built on an existing network framework. First, we optimize the original degraded image through preprocessing to suppress scattered noise at the source and retain the key features for detection. Relying on a lightweight, customized network (with only 8.20 M parameters), high-fidelity reconstruction is achieved to further reduce scattering interference before the final target detection. The inference speed of this framework is significantly better than that of the existing network: on an RTX 4060, it reaches 147.93 frames per second. After reconstruction, the average confidence of dynamic object detection is 0.95, with a maximum of 0.99, effectively solving the problem of detection failure in dynamic scattering media. The framework can provide technical support for scenarios such as unmanned aerial vehicle (UAV) monitoring in foggy weather, biomedical target recognition, and low-altitude security. Full article
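A generic timing harness of the kind used to report frames-per-second figures such as the 147.93 FPS above; this is a plain CPU-side sketch with a dummy workload, not the paper's benchmark (GPU timing would additionally require device synchronization).

```python
import time

def measure_fps(infer_fn, frames, warmup=10):
    """Estimate end-to-end inference throughput in frames per second.

    infer_fn: callable taking one frame (e.g. model forward + postprocessing)
    frames:   iterable of pre-loaded frames
    """
    frames = list(frames)
    for f in frames[:warmup]:          # warm-up pass (caches, JIT, clocks)
        infer_fn(f)
    start = time.perf_counter()
    for f in frames:
        infer_fn(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# toy stand-in for a reconstruction + detection pipeline
dummy_frames = list(range(200))
fps = measure_fps(lambda f: sum(range(1000)), dummy_frames)
print(f"{fps:.1f} FPS")
```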

15 pages, 5199 KB  
Article
YOLO-DER: A Dynamic Enhancement Routing Framework for Adverse Weather Vehicle Detection
by Ruilai Gao, Mohd Hasbullah Omar and Massudi Mahmuddin
Electronics 2025, 14(24), 4851; https://doi.org/10.3390/electronics14244851 - 10 Dec 2025
Viewed by 488
Abstract
Deep learning-based vehicle detection methods have achieved impressive performance in favorable conditions. However, their effectiveness declines significantly in adverse weather scenarios, such as fog, rain, and low-illumination environments, due to severe image degradation. Existing approaches often fail to achieve efficient integration between image enhancement and object detection, and typically lack adaptive strategies to cope with diverse degradation patterns. To address these challenges, this paper proposes a novel end-to-end detection framework, You Only Look Once-Dynamic Enhancement Routing (YOLO-DER), which introduces a lightweight Dynamic Enhancement Routing module. This module adaptively selects the optimal enhancement strategy—such as dehazing or brightness correction—based on the degradation characteristics of the input image. It is jointly optimized with the YOLOv12 detector to achieve tight integration of enhancement and detection. Extensive experiments on BDD100K, Foggy Cityscapes, and ExDark demonstrate the superior performance of YOLO-DER, yielding mAP50 scores of 80.8%, 57.9%, and 85.6%, which translate into absolute gains of +3.8%, +2.3%, and +2.9% over YOLOv12 on the respective datasets. The results confirm its robustness and generalization across foggy, rainy, and low-light conditions, providing an efficient and scalable solution for all-weather visual perception in autonomous driving. Full article
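The Dynamic Enhancement Routing module is learned and jointly optimized with the detector; purely as an illustration of routing by degradation statistics, here is a hand-crafted sketch with hypothetical thresholds and stand-in enhancement branches.

```python
import numpy as np

def route_enhancement(img):
    """Pick an enhancement branch from simple degradation statistics.

    img: float RGB image in [0, 1], shape (H, W, 3).
    Returns (branch_name, enhanced_image).
    """
    brightness = img.mean()
    contrast = img.std()
    if brightness < 0.25:                       # low light -> brighten
        return "brightness", np.clip(img ** 0.5, 0.0, 1.0)   # gamma correction
    if contrast < 0.10:                         # washed out -> treat as haze
        lo, hi = img.min(), img.max()
        return "dehaze", (img - lo) / max(hi - lo, 1e-6)      # contrast stretch
    return "identity", img                      # clear image, pass through

rng = np.random.default_rng(0)
hazy = 0.6 + 0.05 * rng.random((64, 64, 3))     # bright, low-contrast input
branch, out = route_enhancement(hazy)
print(branch, out.std() > hazy.std())
```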

18 pages, 1560 KB  
Article
Transmission Line Bird Species Detection and Identification Based on Double Data Enhancement and Improvement of YOLOv8s
by Tao Xue, Dingyue Cheng, Tao Chen, Rui Zhao, Zhenhao Wang and Chong Wang
Appl. Sci. 2025, 15(24), 12953; https://doi.org/10.3390/app152412953 - 9 Dec 2025
Viewed by 279
Abstract
To address the challenge of bird species detection on transmission lines, this paper proposes a detection method based on dual data enhancement and an improved YOLOv8s model. The method aims to improve the accuracy of identifying small- and medium-sized targets in bird detection scenes on transmission lines, while also accounting for the impact of changing weather conditions. To address these issues, a dual data enhancement strategy is introduced. The model’s generalization ability in outdoor environments is enhanced by simulating various weather conditions, including sunny, cloudy, and foggy days, as well as halo effects. Additionally, an improved Mosaic augmentation technique is proposed, which incorporates target density calculation and adaptive scale stitching. Within the improved YOLOv8s architecture, the CBAM attention mechanism is embedded in the Backbone network, and BiFPN replaces the original Neck module to facilitate bidirectional feature extraction and fusion. Experimental results demonstrate that the proposed method achieves high detection accuracy for all bird species, with an average precision rate of 94.2%, a recall rate of 89.7%, and an mAP@50 of 94.2%. The model also maintains high inference speed, demonstrating potential for real-time detection requirements. Ablation and comparative experiments validate the effectiveness of the proposed model, confirming its suitability for edge deployment and its potential as an effective solution for bird species detection and identification on transmission lines. Full article
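The improved Mosaic augmentation builds on the standard four-image stitching sketched below in NumPy, assuming equally sized inputs; the paper's target-density calculation and adaptive scale stitching are not reproduced here.

```python
import numpy as np

def mosaic4(imgs, labels):
    """Stitch four equally sized images into one 2x2 mosaic.

    imgs:   list of four (H, W, 3) arrays
    labels: list of four (N_i, 4) boxes in pixel xyxy format
    Returns the mosaic image and boxes shifted into mosaic coordinates.
    """
    h, w = imgs[0].shape[:2]
    canvas = np.zeros((2 * h, 2 * w, 3), dtype=imgs[0].dtype)
    offsets = [(0, 0), (0, w), (h, 0), (h, w)]        # top-left corners
    out_boxes = []
    for img, boxes, (dy, dx) in zip(imgs, labels, offsets):
        canvas[dy:dy + h, dx:dx + w] = img
        if len(boxes):
            shifted = boxes.copy()
            shifted[:, [0, 2]] += dx                   # shift x coordinates
            shifted[:, [1, 3]] += dy                   # shift y coordinates
            out_boxes.append(shifted)
    return canvas, np.concatenate(out_boxes) if out_boxes else np.empty((0, 4))

rng = np.random.default_rng(0)
imgs = [rng.random((64, 64, 3)) for _ in range(4)]
labels = [np.array([[10.0, 10.0, 30.0, 30.0]]) for _ in range(4)]
mosaic, boxes = mosaic4(imgs, labels)
print(mosaic.shape, boxes.shape)   # (128, 128, 3) (4, 4)
```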

21 pages, 34323 KB  
Article
Ship-RT-DETR: An Improved Model for Ship Plate Detection and Identification
by Chang Qin, Xiaoyu Ji, Zhiyi Mo and Jinming Mo
J. Mar. Sci. Eng. 2025, 13(11), 2205; https://doi.org/10.3390/jmse13112205 - 19 Nov 2025
Viewed by 466
Abstract
Ship License Plate Recognition (SLPR) technology serves as a fundamental technological foundation for maritime transportation management. Automated ship identification enhances both regulatory oversight and operational efficiency. However, current recognition models demonstrate significant limitations, including their inability to detect objects in complex environments and challenges in maintaining real-time performance while ensuring accuracy, thereby limiting their practical applicability. This study proposes a novel cascaded framework that integrates RT-DETR-based detection with OCR capabilities. The framework incorporates several key methodological innovations: optimizing the RT-DETR backbone through efficient partial convolutions during training to improve computational efficiency; implementing Conv3XC to modify the ResNet18-backbone BasicBlock using a triple convolutional layer configuration with an enhanced RepC3 kernel design for better feature extraction; and integrating learned position encoding (LPE) to improve the AIFI position encoding mechanism, thereby enhancing detection capabilities. After region detection, PP-OCRv3 is used for character recognition. Experimental results demonstrate the superior performance of our approach: Ship-RT-DETR achieves 96.2% detection accuracy with a 28.5% reduction in parameters and 67.3 FPS, while PP-OCRv3 achieves 91.6% recognition accuracy. Extensive environmental validation across diverse weather conditions (sunny, cloudy, rainy, and foggy) confirms the framework’s robustness, maintaining a detection accuracy above 90% even in challenging foggy conditions, with minimal performance degradation (a 7.7% decrease from optimal conditions). The system’s consistent performance across various environmental conditions (detection standard deviation: 2.84%, OCR confidence standard deviation: 0.0295) establishes a novel and robust methodology for practical SLPR applications. Full article
(This article belongs to the Section Ocean Engineering)
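Learned position encoding typically replaces fixed sinusoidal encodings with a trainable embedding added to the feature tokens; below is a minimal PyTorch sketch of that common pattern, not the paper's exact AIFI integration.

```python
import torch
import torch.nn as nn

class LearnedPositionEncoding(nn.Module):
    """Adds a trainable positional embedding to a sequence of feature tokens."""

    def __init__(self, num_positions: int, d_model: int):
        super().__init__()
        self.pos_embed = nn.Embedding(num_positions, d_model)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, d_model), e.g. a flattened feature map
        positions = torch.arange(tokens.size(1), device=tokens.device)
        return tokens + self.pos_embed(positions)      # broadcast over batch

tokens = torch.randn(2, 400, 256)      # e.g. a 20x20 feature map, 256 channels
lpe = LearnedPositionEncoding(num_positions=400, d_model=256)
print(lpe(tokens).shape)               # torch.Size([2, 400, 256])
```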

14 pages, 3128 KB  
Article
WCGNet: A Weather Codebook and Gating Fusion for Robust 3D Detection Under Adverse Conditions
by Wenfeng Chen, Fei Yan, Ning Wang, Jiale He and Yiqi Wu
Electronics 2025, 14(22), 4379; https://doi.org/10.3390/electronics14224379 - 10 Nov 2025
Viewed by 636
Abstract
Three-dimensional (3D) object detection constitutes a fundamental task in the field of environmental perception. While LiDAR provides high-precision 3D geometric data, its performance significantly degrades under adverse weather conditions like dense fog and heavy snow, where point cloud quality deteriorates. To address this challenge, WCGNet is proposed as a robust 3D detection framework that enhances feature representation against weather corruption. The framework introduces two key components: a Weather Codebook module and a Weather-Aware Gating Fusion module. The Weather Codebook, trained on paired clear and adverse weather scenes, learns to store clear-scene reference features, providing structural guidance for foggy scenarios. The Weather-Aware Gating Fusion module then integrates the degraded features with the codebook’s reference features through a spatial attention mechanism, a multi-head attention network, a gating mechanism, and a fusion module to dynamically recalibrate and combine features, thereby effectively restoring weather-robust representations. Additionally, a foggy point cloud dataset, nuScenes-fog, is constructed based on the nuScenes dataset. Systematic evaluations are conducted on nuScenes, nuScenes-fog, and the STF multi-weather dataset. Experimental results indicate that the proposed framework significantly enhances detection performance and generalization capability under challenging weather conditions, demonstrating strong adaptability across different weather scenarios. Full article
(This article belongs to the Special Issue Application of Machine Learning in Graphics and Images, 2nd Edition)
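A schematic of the two mechanisms named above, reduced to their simplest form: nearest-neighbour lookup into a codebook of clear-scene prototypes, followed by a gated blend of degraded and reference features. This NumPy sketch uses random stand-in data; the paper's modules are learned and attention-based.

```python
import numpy as np

def codebook_lookup(features, codebook):
    """Replace each feature vector with its nearest codebook entry (L2)."""
    # features: (N, D), codebook: (K, D)
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    return codebook[d2.argmin(axis=1)]

def gated_fusion(degraded, reference, gate_weights):
    """Blend degraded and reference features with a per-element sigmoid gate."""
    gate = 1.0 / (1.0 + np.exp(-(degraded * gate_weights)))   # toy gate
    return gate * reference + (1.0 - gate) * degraded

rng = np.random.default_rng(0)
codebook = rng.normal(size=(32, 16))      # clear-scene prototype features
foggy_feats = rng.normal(size=(100, 16))  # degraded per-location features
ref = codebook_lookup(foggy_feats, codebook)
fused = gated_fusion(foggy_feats, ref, gate_weights=rng.normal(size=16))
print(fused.shape)   # (100, 16)
```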

36 pages, 163603 KB  
Article
Multi-Weather DomainShifter: A Comprehensive Multi-Weather Transfer LLM Agent for Handling Domain Shift in Aerial Image Processing
by Yubo Wang, Ruijia Wen, Hiroyuki Ishii and Jun Ohya
J. Imaging 2025, 11(11), 395; https://doi.org/10.3390/jimaging11110395 - 6 Nov 2025
Viewed by 807
Abstract
Recent deep learning-based remote sensing analysis models often struggle with performance degradation due to domain shifts caused by illumination variations (clear to overcast), changing atmospheric conditions (clear to foggy, dusty), and physical scene changes (clear to snowy). Addressing domain shift in aerial image segmentation is challenging due to limited training data availability, including costly data collection and annotation. We propose Multi-Weather DomainShifter, a comprehensive multi-weather domain transfer system that augments single-domain images into various weather conditions without additional laborious annotation, coordinated by a large language model (LLM) agent. Specifically, we utilize Unreal Engine to construct a synthetic dataset featuring images captured under diverse conditions such as overcast, foggy, and dusty settings. We then propose a latent space style transfer model that generates alternate domain versions based on real aerial datasets. Additionally, we present a multi-modal snowy scene diffusion model with LLM-assisted scene descriptors to add snowy elements into scenes. Multi-Weather DomainShifter integrates these two approaches into a tool library and leverages the agent for tool selection and execution. Extensive experiments on the ISPRS Vaihingen and Potsdam datasets demonstrate that domain shift caused by weather change in aerial images leads to significant performance drops, and verify our proposal's capacity to adapt models to perform well in shifted domains while maintaining their effectiveness in the original domain. Full article
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
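The tool-library idea can be sketched as a simple registry that maps a requested target domain to one of the two transfer tools; in the actual system an LLM agent makes this choice and executes the tool, which the stub below only mimics with placeholder functions.

```python
# Hypothetical tool registry: dispatch a requested target weather to one of
# the two transfer tools described in the abstract (both are stand-ins here).
def latent_style_transfer(image, target):
    return f"style-transferred({image} -> {target})"

def snowy_scene_diffusion(image, scene_description):
    return f"snow-added({image}, prompt='{scene_description}')"

TOOLS = {
    "overcast": latent_style_transfer,
    "foggy": latent_style_transfer,
    "dusty": latent_style_transfer,
    "snowy": snowy_scene_diffusion,
}

def shift_domain(image, target_weather, scene_description=""):
    """Dispatch to the appropriate augmentation tool for the target weather."""
    tool = TOOLS[target_weather]
    if tool is snowy_scene_diffusion:
        return tool(image, scene_description)
    return tool(image, target_weather)

print(shift_domain("tile_0001.png", "foggy"))
print(shift_domain("tile_0001.png", "snowy", "light snow on rooftops"))
```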

21 pages, 4223 KB  
Article
The Influence of Information Redundancy on Driving Behavior and Psychological Responses Under Different Fog and Risk Conditions: An Analysis of AR-HUD Interface Designs
by Junfeng Li, Kexin Chen and Mo Chen
Appl. Sci. 2025, 15(20), 11072; https://doi.org/10.3390/app152011072 - 15 Oct 2025
Viewed by 932
Abstract
Adverse road conditions, particularly foggy weather, significantly impair drivers’ abilities to gather information and make judgments in response to unexpected events. To investigate the impact of different Augmented Reality-Head-Up Display (AR-HUD) interfaces (words-only, symbols-only, and words + symbols) on driving behavior, this study simulated driving scenarios under varying visibility and risk levels in foggy conditions, measuring reaction time (RT), time-to-collision (TTC), the maximum lateral acceleration, the maximum longitudinal acceleration, and subjective data. The results indicated that risk levels significantly affected drivers’ RT, TTC, and maximum longitudinal and lateral accelerations. The three interfaces significantly differed in RT and TTC across different risk levels in heavy fog. In light fog, words-only and redundant interfaces significantly affected RT across different risk levels; words-only and symbols-only interfaces significantly affected TTC across different risk levels. In addition, participants responded faster when using text-based interfaces presented in their native language. Analysis of perceived usability data across the three interfaces indicated that under high-risk conditions, both in light fog and heavy fog, participants rated the redundant interface as having higher usability and preferred the redundant interface. Based on these findings, this paper proposes the following design strategies for AR-HUD visual interfaces: (1) Under low-risk foggy driving conditions, all three interface types are effective and applicable. (2) Under high-risk foggy driving conditions, redundant interface design is recommended. Although it may not significantly improve driving performance, this interface type was subjectively perceived as more useful and preferred by the subjects. The findings of this study provide support for the design of AR-HUD interfaces, contributing to enhanced driving safety and human–machine interaction experience under complex meteorological conditions. This offers practical implications for the development and optimization of intelligent vehicle systems. Full article
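Time-to-collision, one of the driving measures above, is the current gap divided by the closing speed; a minimal helper for reference (the exact definition used in the study may differ).

```python
def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
    """Time-to-collision with a lead vehicle, in seconds.

    Defined as the current gap divided by the closing speed; returns
    infinity when the ego vehicle is not closing on the lead vehicle.
    """
    closing_speed = ego_speed_mps - lead_speed_mps
    return gap_m / closing_speed if closing_speed > 0 else float("inf")

# e.g. 40 m gap, ego at 25 m/s in light fog, stopped obstacle ahead
print(time_to_collision(40.0, 25.0, 0.0))   # 1.6 s
```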

25 pages, 18016 KB  
Article
Joint Modeling of Pixel-Wise Visibility and Fog Structure for Real-World Scene Understanding
by Jiayu Wu, Jiaheng Li, Jianqiang Wang, Xuezhe Xu, Sidan Du and Yang Li
Atmosphere 2025, 16(10), 1161; https://doi.org/10.3390/atmos16101161 - 4 Oct 2025
Viewed by 768
Abstract
Reduced visibility caused by foggy weather has a significant impact on transportation systems and driving safety, leading to increased accident risks and decreased operational efficiency. Traditional methods rely on expensive physical instruments, limiting their scalability. To address this challenge in a cost-effective manner, we propose a two-stage network for visibility estimation from stereo image inputs. The first stage computes scene depth via stereo matching, while the second stage fuses depth and texture information to estimate metric-scale visibility. Our method produces pixel-wise visibility maps through a physically constrained, progressive supervision strategy, providing rich spatial visibility distributions beyond a single global value. Moreover, it enables the detection of patchy fog, allowing a more comprehensive understanding of complex atmospheric conditions. To facilitate training and evaluation, we propose an automatic fog-aware data generation pipeline that incorporates both synthetically rendered foggy images and real-world captures. Furthermore, we construct a large-scale dataset encompassing diverse scenarios. Extensive experiments demonstrate that our method achieves state-of-the-art performance in both visibility estimation and patchy fog detection. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
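Metric-scale visibility relates to the atmospheric extinction coefficient through the Koschmieder relation, and per-pixel transmission follows t = exp(−β·d). The sketch below shows these standard physical relations, which physically constrained supervision of this kind typically builds on; the exact formulation is not taken from the paper.

```python
import numpy as np

def visibility_from_extinction(beta, contrast_threshold=0.05):
    """Koschmieder relation: visibility distance from the extinction coefficient.

    beta: atmospheric extinction coefficient in 1/m
    contrast_threshold: minimum perceivable contrast (0.05 gives V ~ 3.0/beta)
    """
    return -np.log(contrast_threshold) / beta

def transmission(depth_m, beta):
    """Per-pixel transmission t = exp(-beta * d) used in the fog image model."""
    return np.exp(-beta * depth_m)

beta = 0.03                                            # fairly dense fog
print(round(float(visibility_from_extinction(beta))))  # ~100 m
depth = np.array([10.0, 50.0, 150.0])
print(transmission(depth, beta))
```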

17 pages, 5189 KB  
Article
YOLO-Extreme: Obstacle Detection for Visually Impaired Navigation Under Foggy Weather
by Wei Wang, Bin Jing, Xiaoru Yu, Wei Zhang, Shengyu Wang, Ziqi Tang and Liping Yang
Sensors 2025, 25(14), 4338; https://doi.org/10.3390/s25144338 - 11 Jul 2025
Cited by 2 | Viewed by 2327
Abstract
Visually impaired individuals face significant challenges in navigating safely and independently, particularly under adverse weather conditions such as fog. To address this issue, we propose YOLO-Extreme, an enhanced object detection framework based on YOLOv12, specifically designed for robust navigation assistance in foggy environments. The proposed architecture incorporates three novel modules: the Dual-Branch Bottleneck Block (DBB) for capturing both local spatial and global semantic features, the Multi-Dimensional Collaborative Attention Module (MCAM) for joint spatial-channel attention modeling to enhance salient obstacle features and reduce background interference in foggy conditions, and the Channel-Selective Fusion Block (CSFB) for robust multi-scale feature integration. Comprehensive experiments conducted on the Real-world Task-driven Traffic Scene (RTTS) foggy dataset demonstrate that YOLO-Extreme achieves state-of-the-art detection accuracy and maintains high inference speed, outperforming existing dehazing-and-detect and mainstream object detection methods. To further verify the generalization capability of the proposed framework, we also performed cross-dataset experiments on the Foggy Cityscapes dataset, where YOLO-Extreme consistently demonstrated superior detection performance across diverse foggy urban scenes. The proposed framework significantly improves the reliability and safety of assistive navigation for visually impaired individuals under challenging weather conditions, offering practical value for real-world deployment. Full article
(This article belongs to the Section Navigation and Positioning)
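For context on joint spatial-channel attention, here is a generic CBAM-style module in PyTorch; this is not the paper's MCAM, only the common pattern of channel attention followed by spatial attention.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic sequential channel + spatial attention (CBAM-style)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # channel attention from global average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention from channel-wise mean and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 64, 32, 32)
print(ChannelSpatialAttention(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```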

21 pages, 9849 KB  
Article
A Motion Control Strategy for a Blind Hexapod Robot Based on Reinforcement Learning and Central Pattern Generator
by Lei Wang, Ruiwen Li, Xiaoxiao Wang, Weidong Gao and Yiyang Chen
Symmetry 2025, 17(7), 1058; https://doi.org/10.3390/sym17071058 - 4 Jul 2025
Cited by 2 | Viewed by 1443
Abstract
Hexapod robots that use external sensors to sense the environment are susceptible to factors such as light intensity or foggy weather, which can drastically reduce their mobility. This paper proposes a motion control strategy for a blind hexapod robot. The robot is symmetrical, and its environmental sensing capability is obtained by collecting proprioceptive signals from internal sensors, allowing it to pass through rugged terrain without the need for external sensors. The motion gait is generated by a central pattern generator (CPG) network constructed from Hopf oscillators; this periodic gait is controlled by specific parameters given in advance. A policy network is then trained in the target terrain using deep reinforcement learning (DRL). The trained policy network fine-tunes these parameters by acquiring information about the current terrain, yielding an adaptive gait. The experimental results show that the adaptive gait enables the hexapod robot to stably traverse various complex terrains. Full article
(This article belongs to the Section Engineering and Materials)
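A Hopf oscillator, the building block of the CPG network above, converges to a stable limit cycle of radius sqrt(mu) at angular frequency omega; below is a minimal Euler-integration sketch, with gains and step size chosen purely for illustration.

```python
import numpy as np

def hopf_step(x, y, dt=0.01, mu=1.0, omega=2.0 * np.pi, alpha=10.0):
    """One Euler step of a Hopf oscillator.

    The oscillator converges to a limit cycle of radius sqrt(mu) and angular
    frequency omega; CPG networks couple several such oscillators with phase
    offsets to generate rhythmic joint trajectories.
    """
    r2 = x * x + y * y
    dx = alpha * (mu - r2) * x - omega * y
    dy = alpha * (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

x, y = 0.1, 0.0
trajectory = []
for _ in range(1000):            # 10 s of simulated time
    x, y = hopf_step(x, y)
    trajectory.append(x)
print(round(max(trajectory[500:]), 2))   # amplitude settles near sqrt(mu) = 1.0
```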

25 pages, 3014 KB  
Article
Performance Assessment of Low- and Medium-Cost PM2.5 Sensors in Real-World Conditions in Central Europe
by Bushra Atfeh, Zoltán Barcza, Veronika Groma, Ágoston Vilmos Tordai and Róbert Mészáros
Atmosphere 2025, 16(7), 796; https://doi.org/10.3390/atmos16070796 - 30 Jun 2025
Cited by 3 | Viewed by 4313
Abstract
In addition to the use of reference instruments, low-cost sensors (LCSs) are becoming increasingly popular for air quality monitoring both indoors and outdoors. These sensors provide real-time measurements of pollutants and facilitate better spatial and temporal coverage. However, these simpler devices are typically characterised by lower accuracy and precision and can be more sensitive to the environmental conditions than the reference instruments. It is therefore crucial to characterise the applicability and limitations of these instruments, for which a possible solution is their comparison with reference measurements in real-world conditions. To this end, a measurement campaign has been carried out to evaluate the PM2.5 readings of several low- and medium-cost air quality instruments of different types and categories (IQAir AirVisual Pro, TSI DustTrak™ II Aerosol Monitor 8532, Xiaomi Mijia Air Detector, and Xiaomi Smartmi PM2.5 Air Detector). A GRIMM EDM180 instrument was used as the reference. This campaign took place in Budapest, Hungary, from 12 November to 15 December 2020, during typically humid and foggy weather conditions, when the air pollution level was high due to the increased anthropogenic emissions, including wood burning for heating purposes. The results indicate that the individual sensors tracked the dynamics of PM2.5 concentration changes well (in a linear fashion), but the readings deviated from the reference measurements to varying degrees. Even though the AirVisual sensors performed generally well (0.85 < R2 < 0.93), the accuracy of the units showed inconsistency (13–93%) with typical overestimation, and their readings were significantly affected by elevated relative humidity levels and by temperature. Despite the overall overestimation of PM2.5 by the Xiaomi sensors, they also exhibited strong correlation coefficients with the reference, with R2 values of 0.88 and 0.94. TSI sensors exhibited slight underestimations with high explained variance (R2 = 0.93–0.94) and good accuracy. The results indicated that despite the inherent bias, the low-cost sensors are capable of capturing the temporal variability of PM2.5, thus providing relevant information. After simple and multiple linear regression-based correction, the low-cost sensors provided acceptable results. The results indicate that sensor data correction is a necessary prerequisite for the usability of the instruments. The ensemble method is a reasonable alternative for more accurate estimations of PM2.5. Full article
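The simple linear regression-based correction mentioned above amounts to fitting reference ≈ a·sensor + b and applying that mapping to new readings; here is a NumPy sketch on synthetic data (the study also uses multiple regression with relative humidity and temperature, not shown).

```python
import numpy as np

def linear_correction(sensor, reference):
    """Fit reference ~ a * sensor + b; return the correction function and R^2."""
    a, b = np.polyfit(sensor, reference, deg=1)
    corrected = a * sensor + b
    ss_res = np.sum((reference - corrected) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return (lambda x: a * x + b), r2

rng = np.random.default_rng(0)
reference = rng.uniform(5.0, 80.0, size=500)              # "true" PM2.5, ug/m3
sensor = 1.4 * reference + 3.0 + rng.normal(0, 4.0, 500)  # biased LCS readings
correct, r2 = linear_correction(sensor, reference)
print(round(r2, 2), round(float(correct(70.0)), 1))
```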

36 pages, 122050 KB  
Article
GAML-YOLO: A Precise Detection Algorithm for Extracting Key Features from Complex Environments
by Lihu Pan, Zhiyang Xue and Kaiqiang Zhang
Electronics 2025, 14(13), 2523; https://doi.org/10.3390/electronics14132523 - 21 Jun 2025
Cited by 1 | Viewed by 1603
Abstract
This study addresses three major challenges in non-motorized vehicle rider helmet detection: multi-spectral interference between the helmet and hair color (HSV spatial similarity > 0.82), target occlusion in high-density traffic flows (with peak density reaching 11.7 vehicles/frame), and perception degradation under complex weather conditions (such as overcast, foggy, and strong light interference). To tackle these issues, we developed the GMAL-YOLO detection algorithm. This algorithm enhances feature representation by constructing a Feature-Enhanced Neck Network (FENN) that integrates both global and local features. It employs the Global Mamba Architecture Enhancement (GMET) to reduce parameter size while strengthening global context capturing ability. It also incorporates Multi-Scale Spatial Pyramid Pooling (MSPP) combined with multi-scale feature extraction to improve the model’s robustness. The enhanced channel attention mechanism with self-attention (ECAM) is designed to enhance local feature extraction and stabilize deep feature learning through partial convolution and residual learning, resulting in a 13.04% improvement in detection precision under occlusion scenarios. Furthermore, the model’s convergence speed and localization precision are optimized using the modified Enhanced Precision-IoU loss function (EP-IoU). Experimental results demonstrate that GMAL-YOLO outperforms existing algorithms on the self-constructed HelmetVision dataset and public datasets. Specifically, in extreme scenarios, the false detection rate is reduced by 17.3%, and detection precision in occluded scenes is improved by 13.6%, providing an effective technical solution for intelligent traffic surveillance. Full article
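EP-IoU is the paper's modified loss; for reference, here is the plain intersection-over-union it builds on, for axis-aligned boxes in (x1, y1, x2, y2) format.

```python
import numpy as np

def iou_xyxy(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes in (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

pred   = np.array([12.0, 15.0, 60.0, 80.0])
target = np.array([10.0, 10.0, 55.0, 75.0])
print(round(iou_xyxy(pred, target), 3))
# An IoU-based regression loss is then typically 1 - IoU plus penalty terms.
```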