Search Results (60)

Search Parameters:
Keywords = foggy environment

17 pages, 5189 KiB  
Article
YOLO-Extreme: Obstacle Detection for Visually Impaired Navigation Under Foggy Weather
by Wei Wang, Bin Jing, Xiaoru Yu, Wei Zhang, Shengyu Wang, Ziqi Tang and Liping Yang
Sensors 2025, 25(14), 4338; https://doi.org/10.3390/s25144338 - 11 Jul 2025
Viewed by 533
Abstract
Visually impaired individuals face significant challenges in navigating safely and independently, particularly under adverse weather conditions such as fog. To address this issue, we propose YOLO-Extreme, an enhanced object detection framework based on YOLOv12, specifically designed for robust navigation assistance in foggy environments. The proposed architecture incorporates three novel modules: the Dual-Branch Bottleneck Block (DBB) for capturing both local spatial and global semantic features, the Multi-Dimensional Collaborative Attention Module (MCAM) for joint spatial-channel attention modeling to enhance salient obstacle features and reduce background interference in foggy conditions, and the Channel-Selective Fusion Block (CSFB) for robust multi-scale feature integration. Comprehensive experiments conducted on the Real-world Task-driven Traffic Scene (RTTS) foggy dataset demonstrate that YOLO-Extreme achieves state-of-the-art detection accuracy and maintains high inference speed, outperforming existing dehazing-and-detect and mainstream object detection methods. To further verify the generalization capability of the proposed framework, we also performed cross-dataset experiments on the Foggy Cityscapes dataset, where YOLO-Extreme consistently demonstrated superior detection performance across diverse foggy urban scenes. The proposed framework significantly improves the reliability and safety of assistive navigation for visually impaired individuals under challenging weather conditions, offering practical value for real-world deployment. Full article
(This article belongs to the Section Navigation and Positioning)

21 pages, 9849 KiB  
Article
A Motion Control Strategy for a Blind Hexapod Robot Based on Reinforcement Learning and Central Pattern Generator
by Lei Wang, Ruiwen Li, Xiaoxiao Wang, Weidong Gao and Yiyang Chen
Symmetry 2025, 17(7), 1058; https://doi.org/10.3390/sym17071058 - 4 Jul 2025
Viewed by 350
Abstract
Hexapod robots that use external sensors to sense the environment are susceptible to factors such as light intensity or foggy weather. This effect leads to a drastic decrease in the motility of the hexapod robot. This paper proposes a motion control strategy for a blind hexapod robot. The hexapod robot is symmetrical and its environmental sensing capability is obtained by collecting proprioceptive signals from internal sensors, allowing it to pass through rugged terrain without the need for external sensors. The motion gait of the hexapod robot is generated by a central pattern generator (CPG) network constructed by Hopf oscillators. This gait is a periodic gait controlled by specific parameters given in advance. A policy network is trained in the target terrain using deep reinforcement learning (DRL). The trained policy network is able to fine-tune specific parameters by acquiring information about the current terrain. Thus, an adaptive gait is obtained. The experimental results show that the adaptive gait enables the hexapod robot to stably traverse various complex terrains. Full article
(This article belongs to the Section Engineering and Materials)
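The CPG described above is built from Hopf oscillators, whose limit-cycle dynamics pull any starting state onto a stable periodic orbit — exactly the property that makes them useful gait generators. A minimal sketch (step size and parameter values are illustrative assumptions, not the paper's):

```python
import math

def hopf_step(x, y, dt=0.01, alpha=10.0, mu=1.0, omega=2.0 * math.pi):
    """One Euler step of a Hopf oscillator: the state converges to a
    limit cycle of radius sqrt(mu), rotating at angular frequency omega."""
    r2 = x * x + y * y
    dx = alpha * (mu - r2) * x - omega * y
    dy = alpha * (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

# Integrate from a small perturbation; the state settles on the unit
# circle (radius sqrt(mu) = 1), yielding a periodic gait signal.
x, y = 0.1, 0.0
for _ in range(5000):
    x, y = hopf_step(x, y)
radius = math.sqrt(x * x + y * y)
```

In a full CPG network, several such oscillators would be coupled with fixed phase offsets to produce tripod or wave gaits, and a trained policy could fine-tune parameters such as `mu` and `omega` per leg, as the abstract describes.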

25 pages, 5708 KiB  
Article
AEA-YOLO: Adaptive Enhancement Algorithm for Challenging Environment Object Detection
by Abdulrahman Kariri and Khaled Elleithy
AI 2025, 6(7), 132; https://doi.org/10.3390/ai6070132 - 20 Jun 2025
Viewed by 803
Abstract
Despite deep learning-based object detection techniques showing promising results, identifying items from low-quality images under unfavorable weather settings remains challenging because of balancing demands and overlooking useful latent information. Meanwhile, YOLO is being developed for real-time object detection, addressing limitations of current models, which struggle with low accuracy and high resource requirements. To address these issues, we propose the Adaptive Enhancement Algorithm YOLO (AEA-YOLO) framework, which enhances each image to improve detection capabilities. A lightweight Parameter Prediction Network (PPN) containing only six thousand parameters predicts scene-adaptive coefficients for a differentiable Image Enhancement Module (IEM), and the enhanced image is then processed by a standard YOLO detector, called the Detection Network (DN). Our method can adaptively process images in both favorable and unfavorable weather conditions. Experimental results show that our approach achieves gains of 7% and more than 12% in mean average precision (mAP) over existing models on the artificially degraded PASCAL VOC Foggy dataset and the Real-world Task-driven Testing Set (RTTS), respectively. Moreover, our approach achieves good results compared with other state-of-the-art and adaptive domain models of object detection in normal and challenging environments. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
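The abstract describes a predictor that outputs scene-adaptive coefficients for a differentiable enhancement module. A rough, hypothetical sketch of the kind of filters such a module might apply (the paper's actual filter set and coefficient names are not given here, so gamma/contrast/brightness are assumptions):

```python
import numpy as np

def enhance(image, gamma, contrast, brightness):
    """Apply scene-adaptive coefficients the way a differentiable
    enhancement module might: a gamma curve, contrast scaling around
    the mean, and a brightness offset. Every operation is
    differentiable, so gradients can flow back to the predictor."""
    img = np.clip(image, 1e-6, 1.0) ** gamma       # gamma correction
    mean = img.mean()
    img = mean + contrast * (img - mean)           # contrast stretch
    return np.clip(img + brightness, 0.0, 1.0)     # brightness shift

# A flat, hazy-looking patch and coefficients a predictor might
# plausibly emit for fog: darken mid-tones, raise contrast.
hazy = np.full((4, 4), 0.7)
out = enhance(hazy, gamma=1.5, contrast=1.2, brightness=-0.1)
```

The key design point is that the enhancement stays inside the training graph: the detector's loss supervises the coefficient predictor end to end, so no ground-truth "clean" image is needed.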

18 pages, 4774 KiB  
Article
InfraredStereo3D: Breaking Night Vision Limits with Perspective Projection Positional Encoding and Groundbreaking Infrared Dataset
by Yuandong Niu, Limin Liu, Fuyu Huang, Juntao Ma, Chaowen Zheng, Yunfeng Jiang, Ting An, Zhongchen Zhao and Shuangyou Chen
Remote Sens. 2025, 17(12), 2035; https://doi.org/10.3390/rs17122035 - 13 Jun 2025
Viewed by 456
Abstract
In fields such as military reconnaissance, forest fire prevention, and autonomous driving at night, there is an urgent need for high-precision three-dimensional reconstruction in low-light or night environments. The acquisition of remote sensing data by RGB cameras relies on external light, resulting in a significant decline in image quality and making it difficult to meet task requirements. Lidar-based methods produce poor imaging results in rainy and foggy weather, in close-range scenes, and in scenarios requiring thermal imaging data. In contrast, infrared cameras can effectively overcome these challenges because their imaging mechanism differs from those of RGB cameras and lidar. However, research on three-dimensional scene reconstruction from infrared images is relatively immature, especially in the field of infrared binocular stereo matching. This situation presents two main challenges: first, there is no dataset specifically for infrared binocular stereo matching; second, the lack of texture information in infrared images limits the extension of RGB-based methods to the infrared reconstruction problem. To solve these problems, this study first constructs an infrared binocular stereo matching dataset and then proposes an innovative transformer method based on perspective projection positional encoding to complete the infrared binocular stereo matching task. In this paper, a stereo matching network combining a transformer with a cost volume is constructed. Existing work on transformer positional encoding usually uses a parallel projection model to simplify the calculation. Our method is based on the actual perspective projection model, so that each pixel is associated with a different projection ray. It effectively solves the feature extraction and matching problems caused by insufficient texture information in infrared images and significantly improves matching accuracy. Experiments on the infrared binocular stereo matching dataset proposed in this paper demonstrate the effectiveness of the method. Full article
(This article belongs to the Collection Visible Infrared Imaging Radiometers and Applications)

21 pages, 7844 KiB  
Article
WRRT-DETR: Weather-Robust RT-DETR for Drone-View Object Detection in Adverse Weather
by Bei Liu, Jiangliang Jin, Yihong Zhang and Chen Sun
Drones 2025, 9(5), 369; https://doi.org/10.3390/drones9050369 - 14 May 2025
Cited by 1 | Viewed by 1673
Abstract
With the rapid advancement of UAV technology, robust object detection under adverse weather conditions has become critical for enhancing UAVs’ environmental perception. However, object detection in such challenging conditions remains a significant hurdle, and standardized evaluation benchmarks are still lacking. To bridge this gap, we introduce the Adverse Weather Object Detection (AWOD) dataset—a large-scale dataset tailored for object detection in complex maritime environments. The AWOD dataset comprises 20,000 images captured under three representative adverse weather conditions: foggy, flare, and low-light. To address the challenges of scale variation and visual degradation introduced by harsh weather, we propose WRRT-DETR, a weather-robust object detection framework optimized for small objects. Within this framework, we design a gated single-head global–local attention backbone block (GLCE) to fuse local convolutional features with global attention, enhancing small object distinguishability. Additionally, a Frequency–Spatial Feature Augmentation Module (FSAE) is introduced to incorporate frequency-domain information for improved robustness, while an Attention-based Cross-Fusion Module (ACFM) facilitates the integration of multi-scale features. Experimental results demonstrate that WRRT-DETR outperforms SOTA methods on the AWOD dataset, exhibiting superior robustness and detection accuracy in complex weather conditions. Full article
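The FSAE module described above incorporates frequency-domain information into the features. A toy illustration of the general idea (the module's real design is not specified in the abstract; using a log-magnitude FFT as the frequency channel is an assumption for illustration):

```python
import numpy as np

def frequency_augment(feat):
    """Toy frequency-spatial augmentation: compute the 2-D FFT
    magnitude of a feature map (log-scaled for dynamic range) and
    stack it with the original spatial map as a second channel.
    Haze mostly suppresses high frequencies, so the spectrum carries
    degradation cues the spatial map alone may miss."""
    mag = np.log1p(np.abs(np.fft.fft2(feat)))
    return np.stack([feat, mag], axis=0)

feat = np.random.RandomState(0).rand(8, 8)   # stand-in feature map
out = frequency_augment(feat)                # shape (2, 8, 8)
```

A real module would follow this concatenation with learned convolutions to fuse the two domains; here the stack alone shows the data flow.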

21 pages, 5674 KiB  
Article
Augmented Reality Head-Up Display Navigation Design in Extreme Weather Conditions: Enhancing Driving Experience in Rain and Fog
by Qi Zhu and Ziqi Liu
Electronics 2025, 14(9), 1745; https://doi.org/10.3390/electronics14091745 - 25 Apr 2025
Cited by 1 | Viewed by 761
Abstract
This study investigates the impact of extreme weather conditions (specifically heavy rain and fog) on drivers’ situational awareness by analyzing variations in illumination levels. The primary objective is to identify optimal color wavelengths for low-light environments, thereby providing a theoretical foundation for the design of augmented reality head-up displays (AR-HUDs) in adverse weather conditions. A within-subjects experimental design was employed with 26 participants in a simulated driving environment. Participants were exposed to different illumination levels and AR-HUD colors. Eye-tracking metrics, including fixation duration, visit duration, and fixation count, were recorded alongside situational awareness ratings to assess cognitive load and information processing efficiency. The results revealed that the yellow AR-HUD significantly enhanced situational awareness and reduced cognitive load in foggy conditions. While subjective assessments indicated no substantial effect of lighting conditions, objective measurements demonstrated the superior effectiveness of the yellow AR-HUD in foggy weather. These findings suggest that yellow AR-HUD navigation icons are better suited to extreme weather environments, offering potential improvements in driving performance and overall road safety. Full article

31 pages, 24332 KiB  
Article
IDDNet: Infrared Object Detection Network Based on Multi-Scale Fusion Dehazing
by Shizun Sun, Shuo Han, Junwei Xu, Jie Zhao, Ziyu Xu, Lingjie Li, Zhaoming Han and Bo Mo
Sensors 2025, 25(7), 2169; https://doi.org/10.3390/s25072169 - 29 Mar 2025
Viewed by 575
Abstract
In foggy environments, infrared images suffer from reduced contrast, degraded details, and blurred objects, which impair detection accuracy and real-time performance. To tackle these issues, we propose IDDNet, a lightweight infrared object detection network that integrates multi-scale fusion dehazing. IDDNet includes a multi-scale fusion dehazing (MSFD) module, which uses multi-scale feature fusion to eliminate haze interference while preserving key object details. A dedicated dehazing loss function, DhLoss, further improves the dehazing effect. In addition to MSFD, IDDNet incorporates three main components: (1) bidirectional polarized self-attention, (2) a weighted bidirectional feature pyramid network, and (3) multi-scale object detection layers. This architecture ensures high detection accuracy and computational efficiency. A two-stage training strategy optimizes the model’s performance, enhancing its accuracy and robustness in foggy environments. Extensive experiments on public datasets demonstrate that IDDNet achieves 89.4% precision and 83.9% AP, showing its superior accuracy, processing speed, generalization, and robust detection performance. Full article
(This article belongs to the Section Sensing and Imaging)

17 pages, 3229 KiB  
Article
Impacts of Climate Change on Suitable Habitat Areas of Larix chinensis in the Qinling Mountains, Shaanxi Province, China
by Ruixiong Deng, Xin Chen, Kaitong Xiao, Ciai Yu, Qiang Zhang, Hang Ning, Lin Wu and Qiang Xiao
Diversity 2025, 17(2), 140; https://doi.org/10.3390/d17020140 - 19 Feb 2025
Viewed by 561
Abstract
Larix chinensis Mill., the sole tree species that can form pure forests at the timberline of the Qinling Mountains, plays a crucial role in maintaining the stability of high-altitude ecosystems. Owing to its special habitat requirements and fragmented distribution pattern, populations of L. chinensis are in a clear stage of degeneration. Numerous studies have underscored the significant effect of climate change on high-altitude vegetation. However, studies focusing on shifts in the distribution of L. chinensis habitats and the key environmental factors limiting their suitable distribution remain limited. Therefore, this study aimed to explore the influence of climate change on the future potential distribution of L. chinensis in order to understand the response of timberlines to climate change. In this study, random forest algorithms were applied to project the future potential distribution of L. chinensis across the Qinling Mountains. The results show that temperature and precipitation play crucial roles in limiting the distribution of L. chinensis, particularly cold–humid climates and rainy, foggy environments, which contribute to its patchy distribution pattern. Currently, L. chinensis populations are distributed in Taibai Mountain and its surrounding alpine areas, concentrated at elevations of 2900–3300 m and on southern slopes of 15–35°, covering approximately 3361 km2. The ecological niche of L. chinensis is relatively narrow in terms of these environmental variables, which differ from the prevailing climate in the Qinling Mountains. Under past climatic conditions, i.e., the last interglacial (LIG) period, the potential distribution range of L. chinensis gradually contracted, nearly disappearing altogether in low-elevation areas. Projections under future climate scenarios suggest the contraction and fragmentation of suitable habitats for L. chinensis. The response of L. chinensis to the RCP 8.5 scenario exhibited the most pronounced changes, followed by the RCP 4.5 scenario. Under all climate scenarios for the 2050s, the suitable distribution of L. chinensis exhibited varying degrees of reduction, with a significant decrease projected under the RCP 8.5 scenario. The suitable distribution will continue to decrease through the 2070s, with the most significant decline projected under the RCP 2.6 scenario. In conclusion, our findings not only offer management strategies for populations of L. chinensis amidst climate change but also serve as crucial references for other endangered tree species in climate-sensitive areas. Full article

23 pages, 4590 KiB  
Article
Foggy Drone Teacher: Domain Adaptive Drone Detection Under Foggy Conditions
by Guida Zheng, Benying Tan, Jingxin Wu, Xiao Qin, Yujie Li and Shuxue Ding
Drones 2025, 9(2), 146; https://doi.org/10.3390/drones9020146 - 16 Feb 2025
Cited by 2 | Viewed by 1088
Abstract
With the growing use of drones, efficient detection algorithms are crucial, especially under adverse weather conditions. Most existing drone detection algorithms perform well only in clear weather and suffer significant performance drops in foggy conditions. This study focuses on improving drone detection in foggy environments using the Mean Teacher framework for domain adaptation. The framework’s performance relies on the quality of the teacher model’s pseudo-labels. To improve their quality, we introduce Foggy Drone Teacher (FDT), which includes three key components: (1) Adaptive Style and Context Augmentation to reduce domain shift and improve pseudo-label quality; (2) Simplified Domain Alignment with a novel adversarial strategy to boost domain adaptation; and (3) Progressive Domain Adaptation Training, a two-stage process that helps the teacher model produce more stable and accurate pseudo-labels. In addition, owing to the lack of publicly available data, we created the Foggy Drone Dataset (FDD) to support this research. Extensive experiments show that our model achieves a 21.1-point increase in AP0.5 over the baseline and outperforms state-of-the-art models. This method significantly improves drone detection accuracy in foggy conditions. Full article
(This article belongs to the Special Issue Detection, Identification and Tracking of UAVs and Drones)
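In the Mean Teacher framework referenced above, the teacher's weights are an exponential moving average (EMA) of the student's, which is what makes its pseudo-labels comparatively stable. A minimal sketch with weights as a plain dict of floats (the decay value is illustrative):

```python
def ema_update(teacher, student, decay=0.999):
    """EMA update used by the Mean Teacher framework: each teacher
    weight slowly tracks the corresponding student weight, smoothing
    out the noise in any single training step."""
    return {k: decay * teacher[k] + (1.0 - decay) * student[k]
            for k in teacher}

# With a fixed student weight of 0.0, the teacher decays toward it
# geometrically: after n updates the gap shrinks by decay**n.
teacher = {"w": 1.0}
student = {"w": 0.0}
for _ in range(1000):
    teacher = ema_update(teacher, student)
```

Because the teacher changes slowly, its detections on fog-augmented images drift less between iterations, which is why pseudo-label quality hinges on this update.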

20 pages, 6412 KiB  
Article
Confidence-Feature Fusion: A Novel Method for Fog Density Estimation in Object Detection Systems
by Zhiyi Li, Songtao Zhang, Zihan Fu, Fanlei Meng and Lijuan Zhang
Electronics 2025, 14(2), 219; https://doi.org/10.3390/electronics14020219 - 7 Jan 2025
Cited by 1 | Viewed by 901
Abstract
Foggy weather poses significant challenges to outdoor computer vision tasks, such as object detection, by degrading image quality and reducing algorithm reliability. In this paper, we present a novel model for estimating fog density in outdoor scenes, aiming to enhance object detection performance under varying foggy conditions. Using a support vector machine (SVM) classification framework, the proposed model categorizes unknown images into distinct fog density levels based on both global and local fog-relevant features. Key features such as entropy, contrast, and dark channel information are extracted to quantify the effects of fog on image clarity and object visibility. Moreover, we introduce an innovative region selection method tailored to images without detectable objects, ensuring robust feature extraction. Evaluation on synthetic datasets with varying fog densities demonstrates a classification accuracy of 85.8%, surpassing existing methods in terms of correlation coefficients and robustness. Beyond accurate fog density estimation, this approach provides valuable insights into the impact of fog on object detection, contributing to safer navigation in foggy environments. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Image and Video Processing)
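The fog-relevant features this abstract names (entropy, contrast, dark channel) can each be computed in a few lines of NumPy. A sketch assuming RGB images scaled to [0, 1]; the patch size, histogram bin count, and exact feature set are illustrative choices, not the paper's:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over color channels and a local patch.
    Airlight scattering lifts dark-channel values, so a high mean
    dark channel is a classic fog cue (dark channel prior)."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def fog_features(img):
    """Global fog-relevant features: grayscale entropy, RMS
    contrast, and mean dark channel. These would feed an SVM-style
    fog-density classifier."""
    gray = img.mean(axis=2)
    hist, _ = np.histogram(gray, bins=32, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return np.array([entropy, gray.std(), dark_channel(img).mean()])

clear = np.random.RandomState(0).rand(16, 16, 3)  # high-contrast scene
foggy = 0.3 * clear + 0.7 * 0.8                   # hazed toward airlight
f_clear, f_foggy = fog_features(clear), fog_features(foggy)
```

As expected, hazing an image lowers its contrast and raises its mean dark channel, which is exactly the separation a fog-density classifier exploits.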

20 pages, 3034 KiB  
Article
A Nonlinear Rebalanced Control Compensation Model for Visual Information of Drivers in the Foggy Section of Expressways
by Xiaolei Li and Qianghui Song
Appl. Sci. 2025, 15(1), 407; https://doi.org/10.3390/app15010407 - 4 Jan 2025
Viewed by 822
Abstract
To obtain the optimal driving visual guidance methods in sudden low-visibility fog environments, it is crucial to analyze the changes in visual characteristics and information demand under low-visibility foggy conditions. The paper constructs a driving visual information demand model for foggy environments based on visual information input and output, using Shannon’s theory and feedback control theory. Two types of foggy road sections with the same visibility, one with guidance lights and one without, were selected for real-vehicle experiments based on the driver’s blood pressure, heart rate, and driving gaze domain tests. The study found the following: (1) In sudden foggy environments, the amount of driving information obtained by drivers decreases instantly with a sudden drop in visibility, failing to meet the information demand for driving cognition, thereby disrupting the dynamic balance state of driving based on speed, visibility, and other road environment factors. The experiment also found that in low-visibility environments, the radius of the human eye’s visual gaze domain becomes smaller, with the gaze range mainly concentrated directly in front of the vehicle, and the lower the visibility, the smaller the gaze domain range; (2) Foggy conditions affect changes in drivers’ blood pressure and heart rate. Installing guidance lights with sufficient illumination at foggy sections to compensate for drivers’ visual information can effectively supplement the visual information required for safe driving; (3) The experiment indicates that the guidance effect of the lights is most pronounced when visibility is within the range of [50 m, 150 m]; however, when visibility is above 500 m, the presence of guidance lights can, to some extent, affect driving safety and increase the risk of accidents. Full article
(This article belongs to the Section Transportation and Future Mobility)

23 pages, 6756 KiB  
Article
Vehicle Target Detection of Autonomous Driving Vehicles in Foggy Environments Based on an Improved YOLOX Network
by Zhaohui Liu, Huiru Zhang and Lifei Lin
Sensors 2025, 25(1), 194; https://doi.org/10.3390/s25010194 - 1 Jan 2025
Cited by 1 | Viewed by 1608
Abstract
To address the problems that exist in the target detection of vehicle-mounted visual sensors in foggy environments, a vehicle target detection method based on an improved YOLOX network is proposed. Firstly, to address the issue of vehicle target feature loss in foggy traffic scene images, specific characteristics of fog-affected imagery are integrated into the network training process. This not only augments the training data but also improves the robustness of the network in foggy environments. Secondly, the YOLOX network is optimized by adding attention mechanisms and an image enhancement module to improve feature extraction and training. Additionally, by combining this with the characteristics of foggy environment images, the loss function is optimized to further improve the target detection performance of the network in foggy environments. Finally, transfer learning is applied during the training process, which not only accelerates network convergence and shortens the training time but also further improves the robustness of the network in different environments. Compared with YOLOv5, YOLOv7, and Faster R-CNN networks, the mAP of the improved network increased by 13.57%, 10.3%, and 9.74%, respectively. The results of the comparative experiments from different aspects illustrated that the proposed method significantly enhances the detection performance for vehicle targets in foggy environments. Full article
(This article belongs to the Section Sensing and Imaging)
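A common way to integrate "characteristics of fog-affected imagery" into training, as this abstract describes, is to synthesize fog on clear images with the atmospheric scattering model I = J·t + A·(1 − t), where t = exp(−β·d). Whether this specific model is what the authors used is an assumption, and the β and airlight values below are illustrative:

```python
import numpy as np

def add_fog(image, depth, beta=1.0, airlight=0.9):
    """Synthesize fog via the atmospheric scattering model:
    each pixel blends the clear scene radiance with the airlight
    according to its transmission t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)[..., None]   # per-pixel transmission
    return image * t + airlight * (1.0 - t)

rng = np.random.RandomState(1)
clear = rng.rand(8, 8, 3)              # stand-in clear training image
depth = np.full((8, 8), 2.0)           # uniform scene depth (meters)
foggy = add_fog(clear, depth, beta=1.0)
```

Varying `beta` (fog density) and using a real depth map yields a family of fog-augmented copies of each training image, which is how such augmentation improves robustness without collecting real foggy data.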

16 pages, 2492 KiB  
Article
Improving the Perception of Objects Under Daylight Foggy Conditions in the Surrounding Environment
by Mohamad Mofeed Chaar, Jamal Raiyn and Galia Weidl
Vehicles 2024, 6(4), 2154-2169; https://doi.org/10.3390/vehicles6040105 - 18 Dec 2024
Cited by 1 | Viewed by 2654
Abstract
Autonomous driving (AD) technology has seen significant advancements in recent years; however, challenges remain, particularly in achieving reliable performance under adverse weather conditions such as heavy fog. In response, we propose a multi-class fog density classification approach to enhance the AD system performance. By categorizing fog density into multiple levels (25%, 50%, 75%, and 100%) and generating separate datasets for each class using the CARLA simulator, we improve the perception accuracy for each specific fog density level and analyze the effects of varying fog intensities. This targeted approach offers benefits such as improved object detection, specialized training for each fog class, and increased generalizability. Our results demonstrate enhanced perception of various objects, including cars, buses, trucks, vans, pedestrians, and traffic lights, across all fog densities. This multi-class fog density method is a promising advancement toward achieving reliable AD performance in challenging weather, improving both the precision and recall of object detection algorithms under diverse fog conditions. Full article

13 pages, 6612 KiB  
Article
Light Absorption Properties of Brown Carbon Aerosol During Winter at a Polluted Rural Site in the North China Plain
by Yanan Tao, Zheng Yang, Xinyu Tan, Peng Cheng, Cheng Wu, Mei Li, Yele Sun, Nan Ma, Yawei Dong, Jiayin Zhang and Tao Du
Atmosphere 2024, 15(11), 1294; https://doi.org/10.3390/atmos15111294 - 28 Oct 2024
Cited by 1 | Viewed by 1149
Abstract
Brown carbon aerosols (BrC), a subfraction of organic aerosols, significantly influence the atmospheric environment, climate and human health. The North China Plain (NCP) is a hotspot for BrC research in China, yet our understanding of the optical properties of BrC in rural regions is still very limited. In this study, we characterize the chemical components and light absorption of BrC at a rural site during winter in the NCP. The average mass concentration of PM1 is 135.1 ± 82.3 μg/m3; organics and nitrate are the main components of PM1. The absorption coefficient of BrC (babs,BrC) is 53.6 ± 45.7 Mm−1, accounting for 39.5 ± 10.2% of the total light absorption at 370 nm. Diurnal variations reveal that the babs,BrC and organics are lower in the afternoon, attributed to the evolution of planetary boundary layers. BrC is mainly emitted locally, and both the aqueous phase and the photooxidation reactions can increase babs,BrC. Notably, the babs,BrC is reduced when RH > 65%. During foggy conditions, reactions in the aqueous phase facilitate the formation of secondary components and contribute to the bleaching of BrC. This process ultimately causes a decrease in both the absorption Ångström exponent (AAE) and the mass absorption efficiency (MAE). In contrast, the babs,BrC, along with AAE and MAE, rise significantly due to substantial primary emissions. This study enhances our understanding of the light absorption of BrC in rural polluted regions of the NCP. Full article
(This article belongs to the Special Issue Development in Carbonaceous Aerosols)
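The two optical metrics tracked in this abstract, the absorption Ångström exponent (AAE) and the mass absorption efficiency (MAE), follow directly from their definitions: b_abs(λ) ∝ λ^(−AAE), and MAE = b_abs / mass concentration. A sketch using the reported 53.6 Mm⁻¹ at 370 nm; the 520 nm coefficient and the mass concentration below are hypothetical values for illustration only:

```python
import math

def absorption_angstrom_exponent(b_abs_1, b_abs_2, wl_1, wl_2):
    """AAE from absorption coefficients at two wavelengths,
    assuming the power law b_abs(wl) proportional to wl**(-AAE)."""
    return -math.log(b_abs_1 / b_abs_2) / math.log(wl_1 / wl_2)

def mass_absorption_efficiency(b_abs, mass_conc):
    """MAE in m^2/g from b_abs in Mm^-1 (= 1e-6 m^-1) and mass
    concentration in ug/m^3 (= 1e-6 g/m^3); the 1e-6 factors cancel."""
    return b_abs / mass_conc

# 53.6 Mm^-1 at 370 nm is from the abstract; 10.0 Mm^-1 at 520 nm
# and 13.4 ug/m^3 are hypothetical inputs for the arithmetic.
aae = absorption_angstrom_exponent(53.6, 10.0, 370.0, 520.0)
mae = mass_absorption_efficiency(53.6, 13.4)
```

A drop in both AAE and MAE, as reported during foggy conditions, means the aerosol absorbs less per unit mass and its absorption spectrum flattens — the signature of BrC bleaching.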

22 pages, 4161 KiB  
Article
A Multi-Tiered Collaborative Network for Optical Remote Sensing Fine-Grained Ship Detection in Foggy Conditions
by Wenbo Zhou, Ligang Li, Bo Liu, Yuan Cao and Wei Ni
Remote Sens. 2024, 16(21), 3968; https://doi.org/10.3390/rs16213968 - 25 Oct 2024
Cited by 3 | Viewed by 1277
Abstract
Ship target detection faces the challenges of complex, changing environments combined with the varied characteristics of ship targets. In practical applications, the complexity of meteorological conditions, uncertain lighting, and the diversity of ship target characteristics can affect the accuracy and efficiency of ship target detection algorithms. Most existing target detection methods perform well in general scenarios but underperform in complex conditions. In this study, a collaborative network for target detection under foggy weather conditions is proposed, aiming to achieve improved accuracy while satisfying the need for real-time detection. First, a collaborative block was designed, and SCConv and PCA modules were introduced to enhance the detection of low-quality images. Second, the PAN + FPN structure was adopted to take full advantage of its lightweight and efficient features. Finally, four detection heads were used to enhance performance. In addition, a dataset for foggy ship detection was constructed based on ShipRSImageNet; the mAP on this dataset reached 48.7%, and the detection speed reached 33.3 frames per second (FPS), comparable to YOLOF. These results show that the proposed model detects ships effectively in remote sensing images on low-contrast foggy days. Full article
