Search Results (99)

Search Parameters:
Keywords = snow noise

23 pages, 7291 KB  
Article
Evaluating LiDAR Perception Algorithms for All-Weather Autonomy
by Himanshu Gupta, Achim J. Lilienthal and Henrik Andreasson
Sensors 2025, 25(24), 7436; https://doi.org/10.3390/s25247436 - 6 Dec 2025
Viewed by 515
Abstract
LiDAR is used in autonomous driving for navigation, obstacle avoidance, and environment mapping. However, adverse weather conditions introduce noise into sensor data, potentially degrading the performance of perception algorithms and compromising the safety and reliability of autonomous driving systems. Hence, in this paper, we investigate the limitations of LiDAR perception algorithms in adverse weather conditions, explore ways to mitigate the effects of noise, and propose future research directions to achieve all-weather autonomy with LiDAR sensors. Using real-world datasets and synthetically generated dense fog, we characterize the noise introduced by adverse weather such as snow, rain, and fog; its effect on sensor data; and how to mitigate it effectively for tasks like object detection, localization, and SLAM. Specifically, we investigate point cloud filtering methods and compare them based on their ability to denoise point clouds, focusing on processing time, accuracy, and limitations. Additionally, we evaluate the impact of adverse weather on state-of-the-art 3D object detection, localization, and SLAM methods, as well as the effect of point cloud filtering on the algorithms’ performance. We find that point cloud filtering methods are partially successful at removing noise due to adverse weather but must be fine-tuned for the specific LiDAR, application scenario, and type of adverse weather. Adverse weather negatively affected 3D object detection, but performance improved with dynamic filtering algorithms. Heavy snowfall does not affect localization when using a map constructed in clear weather, but localization fails in dense fog due to a low number of feature points. SLAM also failed in thick fog outdoors but performed well in heavy snowfall. Filtering algorithms have varied effects on SLAM performance depending on the type of scan-matching algorithm.
(This article belongs to the Special Issue Recent Advances in LiDAR Sensing Technology for Autonomous Vehicles)
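
The point cloud filtering this paper evaluates can be illustrated with a dynamic radius outlier removal (DROR-style) filter, in which the neighbor-search radius grows with range so that sparse far-field returns are not mistaken for snow. A minimal sketch assuming numpy/scipy and an N×3 point array; parameter values are illustrative defaults, not the paper's tuned settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def dynamic_radius_outlier_removal(points, alpha=0.01, beta=3.0, min_neighbors=3):
    """DROR-style filter: keep a point only if enough neighbors fall inside a
    radius that scales with the point's range from the sensor.
    points: (N, 3) array; alpha: angular-resolution proxy (rad);
    beta: radius multiplier. All values are illustrative, not from the paper."""
    ranges = np.linalg.norm(points[:, :2], axis=1)      # horizontal range
    radii = np.maximum(beta * alpha * ranges, 0.05)     # dynamic search radius
    tree = cKDTree(points)
    keep = np.zeros(len(points), dtype=bool)
    for i, (p, r) in enumerate(zip(points, radii)):
        # query includes the point itself, hence the strict inequality
        keep[i] = len(tree.query_ball_point(p, r)) > min_neighbors
    return points[keep]

# Hypothetical usage on a noisy scan
scan = np.random.randn(1000, 3) * 10
clean = dynamic_radius_outlier_removal(scan)
```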

19 pages, 6339 KB  
Article
Effect of Coniferous Tree–Shrub Mixtures on Traffic Noise Reduction in Public Spaces
by Qi Meng, Olga Evgrafova and Mengmeng Li
Buildings 2025, 15(23), 4266; https://doi.org/10.3390/buildings15234266 - 26 Nov 2025
Viewed by 266
Abstract
Despite the well-established ability of urban green belts to reduce traffic noise, a comprehensive analysis of the specific role played by mixed coniferous trees and shrubs in noise mitigation remains lacking. This study aimed to clarify how different planting patterns and the characteristics of plants affect their noise-reduction performance. To achieve this, noise reduction was measured at 18 roadside green spaces comprising mixed coniferous trees and shrubs in Harbin, China, and Moscow, Russia. The results indicate that in lanes 5–15 m wide, the ‘Abreast’ planting pattern consistently offered greater noise reduction than the ‘Taffy’ configuration at all measured distances (5, 10, and 15 m). In addition, noise-reduction effectiveness improved in winter due to snow cover, which enhanced the sound-absorbing properties of the vegetation. In our analysis, key factors such as diameter at breast height, minimum height under branches, and road width emerged as crucial predictors of traffic noise reduction. Among these, carriageway width and sidewalk width exhibited the strongest correlations with noise attenuation. Finally, we developed a quantitative model for roadside green spaces that incorporates plant characteristics, planting schemes, and road features. This model allows us to assess the contribution of each factor to overall noise reduction. The results of this study provide a scientific basis for designing and optimising vegetation-based noise-mitigation strategies to enhance the urban acoustic environment while also offering an analytical framework to support evidence-based urban forestry planning and policy.
(This article belongs to the Special Issue Architecture and Landscape Architecture)
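
The quantitative model described here, relating plant and road characteristics to measured attenuation, is in spirit a multivariable regression. A hedged sketch with scikit-learn; the feature names and all numbers below are hypothetical stand-ins for the paper's measured predictors, not its data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical predictors per site: diameter at breast height (cm),
# minimum height under branches (m), carriageway width (m), sidewalk width (m)
X = np.array([
    [18.0, 1.2, 7.0, 2.0],
    [25.0, 0.8, 10.5, 3.0],
    [22.0, 1.5, 14.0, 2.5],
    [30.0, 0.6, 7.0, 1.5],
])
y = np.array([4.1, 5.6, 6.8, 3.9])  # measured noise reduction in dB(A), invented

model = LinearRegression().fit(X, y)
# Fitted coefficients indicate each factor's contribution to attenuation
print(dict(zip(["DBH", "height_under_branches", "carriageway", "sidewalk"],
               model.coef_.round(3))))
```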

23 pages, 59318 KB  
Article
BAT-Net: Bidirectional Attention Transformer Network for Joint Single-Image Desnowing and Snow Mask Prediction
by Yongheng Zhang
Information 2025, 16(11), 966; https://doi.org/10.3390/info16110966 - 7 Nov 2025
Viewed by 340
Abstract
In the wild, snow is not merely additive noise; it is a non-stationary, semi-transparent veil whose spatial statistics vary with depth, illumination, and wind. Because conventional two-stage pipelines first detect a binary mask and then inpaint the occluded regions, any early misclassification is irreversibly baked into the final result, leading to over-smoothed textures or ghosting artifacts. We propose BAT-Net, a Bidirectional Attention Transformer Network that frames desnowing as a coupled representation learning problem, jointly disentangling snow appearance and scene radiance in a single forward pass. Our core contributions are as follows: (1) A novel dual-decoder architecture where a background decoder and a snow decoder are coupled via a Bidirectional Attention Module (BAM). The BAM implements a continuous predict–verify–correct mechanism, allowing the background branch to dynamically accept, reject, or refine the snow branch’s occlusion hypotheses, dramatically reducing error accumulation. (2) A lightweight yet effective multi-scale feature fusion scheme comprising a Scale Conversion Module (SCM) and a Feature Aggregation Module (FAM), enabling the model to handle the large scale variance among snowflakes without a prohibitive computational cost. (3) The introduction of the FallingSnow dataset, curated to eliminate the label noise caused by irremovable ground snow in existing benchmarks, providing a cleaner benchmark for evaluating dynamic snow removal. Extensive experiments on synthetic and real-world datasets demonstrate that BAT-Net sets a new state of the art. It achieves a PSNR of 35.78 dB on the CSD dataset, outperforming the best prior model by 1.37 dB, and also achieves top results on the SRRS (32.13 dB) and Snow100K (34.62 dB) datasets. The proposed method has significant practical applications in autonomous driving and surveillance systems, where accurate snow removal is crucial for maintaining visual clarity.
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning, 2nd Edition)
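
The coupling between the background and snow decoders can be conveyed with a generic bidirectional cross-attention block in PyTorch. This is a sketch of the general mechanism, not the paper's BAM: each stream attends to the other and refines its own features, which is the structural idea behind the predict–verify–correct loop.

```python
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    """Two feature streams exchange information via cross-attention:
    the background branch queries snow features and vice versa.
    A generic sketch, not the BAM described in the paper."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.bg_from_snow = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.snow_from_bg = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_bg = nn.LayerNorm(dim)
        self.norm_snow = nn.LayerNorm(dim)

    def forward(self, bg, snow):  # both (B, N, dim) token sequences
        bg_upd, _ = self.bg_from_snow(bg, snow, snow)   # verify snow hypotheses
        snow_upd, _ = self.snow_from_bg(snow, bg, bg)   # correct against scene
        return self.norm_bg(bg + bg_upd), self.norm_snow(snow + snow_upd)

bg, snow = torch.randn(2, 256, 64), torch.randn(2, 256, 64)
bg_out, snow_out = BidirectionalCrossAttention()(bg, snow)
```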

24 pages, 1826 KB  
Article
Cloud and Snow Segmentation via Transformer-Guided Multi-Stream Feature Integration
by Kaisheng Yu, Kai Chen, Liguo Weng, Min Xia and Shengyan Liu
Remote Sens. 2025, 17(19), 3329; https://doi.org/10.3390/rs17193329 - 29 Sep 2025
Viewed by 560
Abstract
Cloud and snow often share comparable visual and structural patterns in satellite observations, making their accurate discrimination and segmentation particularly challenging. To overcome this, we design an innovative Transformer-guided architecture with complementary feature-extraction capabilities. The encoder adopts a dual-path structure, integrating a Transformer Encoder Module (TEM) for capturing long-range semantic dependencies and a ResNet18-based convolutional branch for detailed spatial representation. A Feature-Enhancement Module (FEM) is introduced to promote bidirectional interaction and adaptive feature integration between the two pathways. To improve delineation of object boundaries, especially in visually complex areas, we embed a Deep Feature-Extraction Module (DFEM) at the deepest layer of the convolutional stream. This component refines channel-level information to highlight critical features and enhance edge clarity. Additionally, to address noise from intricate backgrounds and ambiguous cloud–snow transitions, we incorporate both a Transformer Fusion Module (TFM) and a Strip Pooling Auxiliary Module (SPAM) in the decoding phase. These modules collaboratively enhance structural recovery and improve robustness in segmentation. Extensive experiments on the CSWV and SPARCS datasets show that our method consistently outperforms state-of-the-art baselines, demonstrating its strong effectiveness and applicability in real-world cloud- and snow-detection scenarios.
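
Strip pooling, the technique behind the SPAM decoder module, averages along entire rows and columns so that long, banded structures such as cloud edges are captured. A minimal PyTorch sketch of the standard strip pooling idea, not the paper's exact SPAM; channel count and gating choice are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripPooling(nn.Module):
    """Pool along full-height columns and full-width rows, then fuse the two
    directional contexts back into the feature map. A generic sketch."""
    def __init__(self, channels):
        super().__init__()
        self.conv_h = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.conv_v = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):  # x: (B, C, H, W)
        b, c, h, w = x.shape
        col = F.adaptive_avg_pool2d(x, (h, 1))          # (B, C, H, 1) vertical strip
        row = F.adaptive_avg_pool2d(x, (1, w))          # (B, C, 1, W) horizontal strip
        col = self.conv_v(col).expand(-1, -1, h, w)
        row = self.conv_h(row).expand(-1, -1, h, w)
        return x * torch.sigmoid(self.fuse(col + row))  # gated fusion

out = StripPooling(32)(torch.randn(1, 32, 64, 64))
```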

21 pages, 5771 KB  
Article
SCOPE: Spatial Context-Aware Pointcloud Encoder for Denoising Under the Adverse Weather Conditions
by Hyeong-Geun Kim
Appl. Sci. 2025, 15(18), 10113; https://doi.org/10.3390/app151810113 - 16 Sep 2025
Viewed by 658
Abstract
Reliable LiDAR point clouds are essential for perception in robotics and autonomous driving. However, adverse weather conditions introduce substantial noise that significantly degrades perception performance. To tackle this challenge, we first introduce a novel, point-wise annotated dataset of over 800 scenes, created by collecting and comparing point clouds from real-world adverse and clear weather conditions. Building upon this comprehensive dataset, we propose the Spatial Context-Aware Point Cloud Encoder Network (SCOPE), a deep learning framework that identifies noise by effectively learning spatial relationships from sparse point clouds. SCOPE partitions the input into voxels and utilizes a Voxel Spatial Feature Extractor with contrastive learning to distinguish weather-induced noise from structural points. Experimental results validate SCOPE’s effectiveness, achieving high Intersection-over-Union (IoU) scores in snow (88.66%), rain (92.33%), and fog (88.77%), with a mean IoU (mIoU) of 89.92%. These consistent results across diverse scenarios confirm the robustness and practical effectiveness of our method in challenging environments.
(This article belongs to the Special Issue AI-Aided Intelligent Vehicle Positioning in Urban Areas)
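
SCOPE's first step, partitioning the cloud into voxels before feature extraction, can be sketched in a few lines of numpy; the voxel size below is an assumed illustrative value, and the grouping is deliberately simple.

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Group points into voxel cells by integer grid coordinates.
    Returns a dict mapping voxel index -> points inside it. Illustrative only;
    real pipelines cap points per voxel and batch them as tensors."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for pt, key in zip(points, map(tuple, coords)):
        voxels.setdefault(key, []).append(pt)
    return {k: np.stack(v) for k, v in voxels.items()}

cloud = np.random.uniform(-20, 20, size=(5000, 3))
voxels = voxelize(cloud)
print(len(voxels), "occupied voxels")
```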

14 pages, 2957 KB  
Article
DVIOR: Dynamic Vertical and Low-Intensity Outlier Removal for Efficient Snow Noise Removal from LiDAR Point Clouds in Adverse Weather
by Guanqiang Ruan, Fanhao Kong, Chenglin Ding, Kuo Yang, Tao Hu and Rong Yan
Electronics 2025, 14(18), 3662; https://doi.org/10.3390/electronics14183662 - 16 Sep 2025
Viewed by 870
Abstract
With the advancement of autonomous driving technology, the performance of LiDAR in adverse weather conditions has garnered increasing attention. Traditional denoising algorithms, including intensity-based methods like LIOR (a representative filter that relies solely on signal intensity), have limited effectiveness in handling snow noise, especially in removing dynamic noise points and distinguishing them from environmental features. This paper proposes a Dynamic Vertical and Low-Intensity Outlier Removal (DVIOR) algorithm, specifically designed to optimize LiDAR point cloud data under snowy conditions. DVIOR extends intensity-based filtering with vertical height information, dynamically adjusting filter parameters by combining the height and intensity information of the point cloud to filter out snow noise effectively while preserving environmental features. In our experiments, DVIOR was evaluated on several publicly available adverse weather datasets: the Winter Adverse Driving Scenarios (WADS), Canadian Adverse Driving Conditions (CADC), and Radar Dataset for Autonomous Driving in Adverse weather conditions (RADIATE) datasets. Compared with Dynamic Distance–Intensity Outlier Removal (DDIOR), a mainstream dynamic distance–intensity hybrid algorithm of recent years, and the representative intensity-based filter LIOR, DVIOR achieved notable improvements on WADS: an F1-score 10.2 points higher than DDIOR’s and 11.8 points higher than LIOR’s 79.00. DVIOR also performed excellently on the CADC and RADIATE datasets, achieving F1-scores of 87.35 and 86.68, respectively, improvements of 19.82 and 36.9 points over DDIOR and of 4.67 and 17.95 points over LIOR (whose scores were 82.68 and 68.73). These results demonstrate that DVIOR outperforms existing methods, including both distance–intensity hybrid approaches and intensity-based filters like LIOR, in snow noise removal, particularly in complex snowy environments.
(This article belongs to the Special Issue Signal Processing and AI Applications for Vehicles, 2nd Edition)
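
The core idea of combining vertical height with low-intensity gating can be illustrated with a simple numpy filter: points that are both weak returns and inside the height band where airborne snow is expected are dropped. This is a sketch of the general principle with made-up fixed thresholds, not DVIOR's dynamic parameter rules.

```python
import numpy as np

def height_intensity_filter(points, intensity,
                            intensity_thresh=10.0,
                            z_low=-1.0, z_high=3.0):
    """Remove points that are BOTH weak returns and inside the vertical band
    where airborne snow typically appears. Thresholds are illustrative;
    DVIOR adjusts its parameters dynamically from the point cloud itself."""
    z = points[:, 2]
    is_snow = (intensity < intensity_thresh) & (z > z_low) & (z < z_high)
    return points[~is_snow], points[is_snow]

pts = np.random.uniform(-10, 10, (2000, 3))
inten = np.random.uniform(0, 100, 2000)
kept, removed = height_intensity_filter(pts, inten)
```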

22 pages, 7833 KB  
Article
Switch Open-Circuit Fault Diagnosis of the Vienna Rectifier Using the Transformer–BiTCN Network with Improved Snow Geese Algorithm Optimization
by Yaping Deng, Hao Jia, Guangen Lian, Xiaofeng Wang and Yannan Liu
Electronics 2025, 14(18), 3655; https://doi.org/10.3390/electronics14183655 - 15 Sep 2025
Viewed by 533
Abstract
The switch open-circuit fault signal of the Vienna rectifier possesses non-stationary characteristics and is also vulnerable to external interference factors, such as sensor noise and load variation. This phenomenon reduces the performance of traditional methods, including model-based and signal-based algorithms. To improve the accuracy, convergence rate, and robustness of diagnosis models, a hybrid deep learning Transformer–BiTCN optimized via the Improved Snow Geese Algorithm (ISGA) is proposed in this paper. Firstly, the mechanism generating the time-varying and non-stationary characteristics of the Vienna rectifier’s open-circuit fault signal is analyzed. Then, combining the fault signal characteristics of the Vienna rectifier, a hybrid deep learning model using Transformer–BiTCN with multi-scale feature fusion is presented to extract hierarchical features, including both global temporal dependencies and local characteristics, to enhance fault diagnosis accuracy and model robustness. Next, the ISGA optimization algorithm, with a Bloch initialization strategy and the Rime search mechanism, is presented to optimize the hyperparameters of the Transformer–BiTCN model so as to improve convergence and accuracy. Finally, the effectiveness of the proposed method is tested in simulations and experiments, verifying that the Transformer–BiTCN with ISGA optimization is robust to non-stationary open-circuit fault signals and achieves high diagnosis accuracy with a fast convergence rate.
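
The role ISGA plays here, an outer population-based search over the Transformer–BiTCN hyperparameters, can be conveyed with a generic random-perturbation search. This sketch deliberately omits the Bloch initialization and Rime search mechanism specific to ISGA, and the fitness function is a placeholder for training and validating the diagnosis model.

```python
import random

def fitness(params):
    # Placeholder: in practice, train the Transformer-BiTCN with these
    # hyperparameters and return validation diagnosis accuracy.
    return -((params["lr"] - 1e-3) ** 2) - (params["heads"] - 4) ** 2 * 1e-7

def population_search(pop_size=10, generations=20):
    """Generic population-based hyperparameter search, standing in for ISGA."""
    pop = [{"lr": random.uniform(1e-4, 1e-2), "heads": random.choice([2, 4, 8])}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        # Perturb the elites to refill the population (a crude search step)
        pop = elite + [{"lr": p["lr"] * random.uniform(0.8, 1.2),
                        "heads": random.choice([2, 4, 8])} for p in elite]
    return max(pop, key=fitness)

print(population_search())
```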

28 pages, 18957 KB  
Article
Radar-Based Road Surface Classification Using Range-Fast Fourier Transform Learning Models
by Hyunji Lee, Jiyun Kim, Kwangin Ko, Hak Han and Minkyo Youm
Sensors 2025, 25(18), 5697; https://doi.org/10.3390/s25185697 - 12 Sep 2025
Viewed by 1102
Abstract
Traffic accidents caused by black ice have become a serious public safety concern due to their high fatality rates and the limitations of conventional detection systems under low visibility. Millimeter-wave (mmWave) radar, capable of operating reliably in adverse weather and lighting conditions, offers a promising alternative for road surface monitoring. In this study, six representative road surface conditions—dry, wet, thin-ice, ice, snow, and sludge—were experimentally implemented on asphalt and concrete specimens using a temperature and humidity-controlled chamber. mmWave radar data were repeatedly collected to analyze the temporal variations in reflected signals. The acquired signals were transformed into range-based spectra using the Range-Fast Fourier Transform (Range-FFT) and converted into statistical features and graphical representations. These features were used to train and evaluate classification models, including Extreme Gradient Boosting (XGBoost), the Light Gradient-Boosting Machine (LightGBM), a Convolutional Neural Network (CNN), and a Vision Transformer (ViT). While the machine learning models performed well under dry and wet conditions, their accuracy declined in hazardous states. Both the CNN and ViT demonstrated superior performance across all conditions, with the CNN showing consistent stability and the ViT exhibiting competitive accuracy with enhanced global pattern-recognition capabilities. A comprehensive robustness evaluation under various noise and blur conditions revealed distinct characteristics of each model architecture. This study demonstrates the feasibility of mmWave radar for reliable road surface condition recognition and suggests potential for improvement through multimodal sensor fusion and time-series analysis.
(This article belongs to the Section Radar Sensors)
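
The Range-FFT step, transforming each FMCW chirp's fast-time samples into a range spectrum and then summarizing it with statistical features, is standard mmWave processing. A sketch with numpy on a synthetic chirp matrix; the dimensions and feature choices are illustrative, not the paper's configuration.

```python
import numpy as np

# Synthetic ADC data: 64 chirps x 256 fast-time samples (complex baseband)
adc = np.random.randn(64, 256) + 1j * np.random.randn(64, 256)

# Range-FFT: FFT along fast time, windowed to reduce spectral leakage
window = np.hanning(adc.shape[1])
range_spectrum = np.abs(np.fft.fft(adc * window, axis=1))[:, :128]  # positive bins

# Average over chirps, then extract simple statistical features per frame
profile = range_spectrum.mean(axis=0)
features = {
    "mean": float(profile.mean()),
    "std": float(profile.std()),
    "peak_bin": int(profile.argmax()),
    "peak_to_mean": float(profile.max() / profile.mean()),
}
print(features)
```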

18 pages, 1611 KB  
Article
Hybrid Decomposition Strategies and Model Combinatorial Optimization for Runoff Prediction
by Wenbin Hu and Xiaohui Yuan
Water 2025, 17(17), 2560; https://doi.org/10.3390/w17172560 - 29 Aug 2025
Viewed by 1372
Abstract
Runoff prediction plays a critical role in water resource management and flood mitigation. Traditional runoff prediction methods often rely on single-layer optimization frameworks that process the data without decomposition and employ relatively simple prediction models, leading to suboptimal performance. In this study, a novel two-layer optimization framework is proposed that integrates data decomposition techniques with multi-model combination strategies, establishing a closed-loop feedback mechanism between decomposition and prediction processes. The framework employs the Snow Ablation Optimizer (SAO) to optimize combination weights across both layers. Its adaptive fitness function incorporates three evaluation metrics—Mean Absolute Percentage Error (MAPE), Relative Root Mean Square Error (RRMSE), and Nash–Sutcliffe Efficiency (NSE)—to enable adaptive data processing and intelligent model selection. We validated the framework using observational data from Huangzhuang Hydrological Station in the Hanjiang River Basin. The results demonstrate that, at the decomposition layer, optimal performance was achieved by combining non-decomposition, Singular Spectrum Analysis (SSA), and Complementary Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) (with coefficients 0.4061, 0.6115, and −0.0063), paired with the long short-term memory (LSTM) model. At the prediction layer, the proposed algorithm achieved a 32.84% improvement over the best single decomposition method and a 30.21% improvement over the best single combination optimization approach. These findings confirm the framework’s effectiveness in enhancing runoff data decomposition and optimizing multi-model selection.
(This article belongs to the Special Issue Hydrodynamics Science Experiments and Simulations, 2nd Edition)
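
The three metrics in the SAO fitness function have standard definitions, so the evaluation step can be sketched directly in numpy. How the paper weights them into a single fitness value is not stated here, so the equal-weight combination below is an assumption, and the runoff series is invented.

```python
import numpy as np

def mape(obs, sim):
    return float(np.mean(np.abs((obs - sim) / obs)) * 100)  # percent

def rrmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)) / obs.mean())

def nse(obs, sim):
    return float(1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

def fitness(obs, sim):
    # Assumed equal-weight combination: lower MAPE/RRMSE and higher NSE are better
    return nse(obs, sim) - mape(obs, sim) / 100 - rrmse(obs, sim)

obs = np.array([120.0, 98.0, 150.0, 200.0, 175.0])  # invented runoff series
sim = np.array([115.0, 102.0, 140.0, 210.0, 170.0])
print(mape(obs, sim), rrmse(obs, sim), nse(obs, sim), fitness(obs, sim))
```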

11 pages, 1010 KB  
Review
Visual Snow Syndrome: Therapeutic Implications
by Kenneth J. Ciuffreda and Daniella Rutner
J. Clin. Med. 2025, 14(17), 6070; https://doi.org/10.3390/jcm14176070 - 27 Aug 2025
Viewed by 3924
Abstract
Visual snow and its syndrome represent a relatively new and enigmatic neurological condition affecting the human sensory, motor, and perceptual systems. In this narrative review, an overview of the condition and its basic characteristics and demographics is first presented. Then, the six therapeutic approaches that have been attempted over the past decade are detailed: simple discussion of the problem with the patient, medications, special chromatic tints, oculomotor training, visual noise adaptation, and environmental changes. These have met with varying degrees of success; thus far, chromatic tints and oculomotor training appear to be the most successful.
(This article belongs to the Section Ophthalmology)

16 pages, 3972 KB  
Article
Solar Panel Surface Defect and Dust Detection: Deep Learning Approach
by Atta Rahman
J. Imaging 2025, 11(9), 287; https://doi.org/10.3390/jimaging11090287 - 25 Aug 2025
Viewed by 2065
Abstract
In recent years, solar energy has emerged as a pillar of sustainable development. However, maintaining panel efficiency under extreme environmental conditions remains a persistent hurdle. This study introduces an automated defect detection pipeline that leverages deep learning and computer vision to identify five standard anomaly classes: Non-Defective, Dust, Defective, Physical Damage, and Snow on photovoltaic surfaces. To build a robust foundation, a heterogeneous dataset of 8973 images was sourced from public repositories and standardized into a uniform labeling scheme. This dataset was then expanded through an aggressive augmentation strategy, including flips, rotations, zooms, and noise injections. A YOLOv11-based model was trained and fine-tuned using both fixed and adaptive learning rate schedules, achieving a mAP@0.5 of 85% and accuracy, recall, and F1-score above 95% when evaluated across diverse lighting and dust scenarios. The optimized model is integrated into an interactive dashboard that processes live camera streams, issues real-time alerts upon defect detection, and supports proactive maintenance scheduling. Comparative evaluations highlight the superiority of this approach over manual inspections and earlier YOLO versions in both precision and inference speed, making it well suited for deployment on edge devices. Automating visual inspection not only reduces labor costs and operational downtime but also enhances the longevity of solar installations. By offering a scalable solution for continuous monitoring, this work contributes to improving the reliability and cost-effectiveness of large-scale solar energy systems.
(This article belongs to the Section Computer Vision and Pattern Recognition)
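
The augmentation strategy described (flips, rotations, zooms, and noise injections) can be sketched with plain numpy on an image array. The parameters are illustrative, and a real detection pipeline would also transform the bounding-box labels alongside each image.

```python
import numpy as np

def augment(img, rng):
    """Apply one random flip/rotation/zoom/noise op to an HxWx3 uint8 image.
    Parameters are illustrative; detection labels must be transformed too."""
    op = rng.integers(4)
    if op == 0:
        img = img[:, ::-1]                          # horizontal flip
    elif op == 1:
        img = np.rot90(img, k=rng.integers(1, 4))   # 90-degree rotation
    elif op == 2:
        h, w = img.shape[:2]
        ch, cw = int(h * 0.8), int(w * 0.8)         # center-crop "zoom"
        y, x = (h - ch) // 2, (w - cw) // 2
        img = img[y:y + ch, x:x + cw]
    else:
        noise = rng.normal(0, 10, img.shape)        # Gaussian noise injection
        img = np.clip(img.astype(float) + noise, 0, 255).astype(np.uint8)
    return np.ascontiguousarray(img)

rng = np.random.default_rng(0)
sample = rng.integers(0, 255, (480, 640, 3), dtype=np.uint8)
aug = augment(sample, rng)
```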

20 pages, 9888 KB  
Article
WeatherClean: An Image Restoration Algorithm for UAV-Based Railway Inspection in Adverse Weather
by Kewen Wang, Shaobing Yang, Zexuan Zhang, Zhipeng Wang, Limin Jia, Mengwei Li and Shengjia Yu
Sensors 2025, 25(15), 4799; https://doi.org/10.3390/s25154799 - 4 Aug 2025
Viewed by 1023
Abstract
UAV-based inspections are an effective way to ensure railway safety and have gained significant attention. However, images captured during complex weather conditions, such as rain, snow, or fog, often suffer from severe degradation, affecting image recognition accuracy. Existing algorithms for removing rain, snow, and fog have two main limitations: they do not adaptively learn features under varying weather complexities and struggle with managing complex noise patterns in drone inspections, leading to incomplete noise removal. To address these challenges, this study proposes a novel framework for removing rain, snow, and fog from drone images, called WeatherClean. This framework introduces a Weather Complexity Adjustment Factor (WCAF) in a parameterized adjustable network architecture to process weather degradation of varying degrees adaptively. It also employs a hierarchical multi-scale cropping strategy to enhance the recovery of fine noise and edge structures. Additionally, it incorporates a degradation synthesis method based on atmospheric scattering physical models to generate training samples that align with real-world weather patterns, thereby mitigating data scarcity issues. Experimental results show that WeatherClean outperforms existing methods by effectively removing noise particles while preserving image details. This advancement provides more reliable high-definition visual references for drone-based railway inspections, significantly enhancing inspection capabilities under complex weather conditions and ensuring the safety of railway operations.
(This article belongs to the Section Sensing and Imaging)
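
The degradation synthesis step rests on the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)). A sketch of fog synthesis under that model; the depth map and coefficient values below are invented for illustration.

```python
import numpy as np

def synthesize_fog(clear, depth, beta=0.08, airlight=0.9):
    """Standard atmospheric scattering model: I = J*t + A*(1 - t),
    with t = exp(-beta * depth). clear: float image in [0, 1]; depth in meters.
    beta (scattering coefficient) and airlight are illustrative values."""
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission
    return clear * t + airlight * (1.0 - t)

rng = np.random.default_rng(1)
clear = rng.uniform(0, 1, (240, 320, 3))
depth = np.tile(np.linspace(5, 60, 320), (240, 1))  # invented depth ramp
foggy = synthesize_fog(clear, depth)
```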

19 pages, 24394 KB  
Article
TFCNet: A Hybrid Architecture for Multi-Task Restoration of Complex Underwater Optical Images
by Shengya Zhao, Xiufen Ye, Xinkui Mei, Shuxiang Guo and Haibin Qi
J. Mar. Sci. Eng. 2025, 13(6), 1090; https://doi.org/10.3390/jmse13061090 - 29 May 2025
Cited by 1 | Viewed by 809
Abstract
Underwater optical images are crucial in marine exploration. However, capturing these images directly often results in color distortion, noise, blurring, and other undesirable effects, all of which originate from the unique physical and chemical properties of underwater environments. Hence, various factors need to be comprehensively considered when processing underwater optical images that are severely degraded under complex lighting conditions. Most existing methods resolve one issue at a time, making it challenging for these isolated techniques to maintain consistency when addressing multiple degradation factors simultaneously, often leading to unsatisfactory visual outcomes. Motivated by the global modeling capability of the Transformer, this paper introduces TFCNet, a complex hybrid-architecture network designed for underwater optical image enhancement and restoration. TFCNet combines the benefits of the Transformer in capturing long-range dependencies with the local feature extraction potential of convolutional neural networks, resulting in enhanced restoration results. Compared with baseline methods, the proposed approach demonstrated consistent improvements, achieving minimum gains of 0.3 dB in PSNR, 0.01 in SSIM, and a 0.8 reduction in RMSE. TFCNet exhibited a commendable performance in complex underwater optical image enhancement and restoration tasks by effectively rectifying color distortion, eliminating marine snow noise to a certain degree, and restoring blur.
(This article belongs to the Special Issue Advancements in Deep-Sea Equipment and Technology, 3rd Edition)
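
The reported gains are in standard full-reference metrics. PSNR and RMSE have closed-form definitions worth stating (SSIM is more involved and is typically computed with a library such as scikit-image). A numpy sketch on synthetic data:

```python
import numpy as np

def rmse(ref, out):
    return float(np.sqrt(np.mean((ref - out) ** 2)))

def psnr(ref, out, max_val=1.0):
    """PSNR = 10 * log10(MAX^2 / MSE), for images scaled to [0, max_val]."""
    mse = np.mean((ref - out) ** 2)
    return float(10 * np.log10(max_val ** 2 / mse))

ref = np.random.rand(128, 128, 3)
out = np.clip(ref + np.random.normal(0, 0.05, ref.shape), 0, 1)
print(psnr(ref, out), rmse(ref, out))
```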

29 pages, 18935 KB  
Article
OSNet: An Edge Enhancement Network for a Joint Application of SAR and Optical Images
by Keyu Ma, Kai Hu, Junyu Chen, Ming Jiang, Yao Xu, Min Xia and Liguo Weng
Remote Sens. 2025, 17(3), 505; https://doi.org/10.3390/rs17030505 - 31 Jan 2025
Viewed by 2021
Abstract
The combined use of synthetic aperture radar (SAR) and optical images for surface observation is gaining increasing attention. Optical images, with their distinct edge features, can accurately classify different objects, while SAR images reveal deeper internal variations. To address the challenge of differing feature distributions in multi-source images, we propose an edge enhancement network, OSNet (network for optical and SAR images), designed to jointly extract features from optical and SAR images and enhance edge feature representation. OSNet consists of three core modules: a dual-branch backbone, a synergistic attention integration module, and a global-guided local fusion module. These modules, respectively, handle modality-independent feature extraction, feature sharing, and global-local feature fusion. In the backbone module, we introduce a differentiable Lee filter and a Laplacian edge detection operator in the SAR branch to suppress noise and enhance edge features. Additionally, we designed a multi-source attention fusion module to facilitate cross-modal information exchange between the two branches. We validated OSNet’s performance on segmentation tasks (WHU-OPT-SAR) and regression tasks (SNOW-OPT-SAR). The results show that OSNet improved PA and MIoU by 2.31% and 2.58%, respectively, in the segmentation task, and reduced MAE and RMSE by 3.14% and 4.22%, respectively, in the regression task.
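
The Lee filter in OSNet's SAR branch follows the classic local-statistics form: each pixel is pulled toward its local mean by a weight var/(var + noise_var). A plain numpy/scipy sketch of that classic filter with an assumed noise variance; the paper's version is implemented to be differentiable inside the network, which this sketch does not attempt.

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def lee_filter(img, size=5, noise_var=0.05):
    """Classic Lee speckle filter: out = mean + k * (img - mean),
    k = var / (var + noise_var). size and noise_var are illustrative."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0)
    k = var / (var + noise_var)
    return mean + k * (img - mean)

sar = np.random.rand(256, 256)
despeckled = lee_filter(sar)
edges = laplace(despeckled)  # Laplacian edge response, as in the SAR branch
```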

22 pages, 1919 KB  
Article
An Adaptive Multimodal Fusion 3D Object Detection Algorithm for Unmanned Systems in Adverse Weather
by Shenyu Wang, Xinlun Xie, Mingjiang Li, Maofei Wang, Jinming Yang, Zeming Li, Xuehua Zhou and Zhiguo Zhou
Electronics 2024, 13(23), 4706; https://doi.org/10.3390/electronics13234706 - 28 Nov 2024
Cited by 2 | Viewed by 3993
Abstract
Unmanned systems encounter challenging weather conditions during obstacle removal tasks. Researching stable, real-time, and accurate environmental perception methods under such conditions is crucial. Cameras and LiDAR sensors provide different and complementary data. However, the integration of disparate data presents challenges such as feature mismatches and the fusion of sparse and dense information, which can degrade algorithmic performance. Adverse weather conditions, like rain and snow, introduce noise that further reduces perception accuracy. To address these issues, we propose a novel weather-adaptive bird’s-eye view multi-level co-attention fusion 3D object detection algorithm (BEV-MCAF). This algorithm employs an improved feature extraction network to obtain more effective features. A multimodal feature fusion module has been constructed with BEV image feature generation and a co-attention mechanism for better fusion effects. A multi-scale multimodal joint domain adversarial network (M2-DANet) is proposed to enhance adaptability to adverse weather conditions. The efficacy of BEV-MCAF has been validated on both the nuScenes and Ithaca365 datasets, confirming its robustness and good generalization capability in a variety of adverse weather conditions. The findings indicate that our proposed algorithm performs better than the benchmark, showing improved adaptability to harsh weather and enhancing the robustness of unmanned systems, ensuring reliable perception under challenging conditions.