Search Results (56)

Search Parameters:
Keywords = snow scenes

18 pages, 5229 KiB  
Article
Exploring the Spectral Variability of Estonian Lakes Using Spaceborne Imaging Spectroscopy
by Alice Fabbretto, Mariano Bresciani, Andrea Pellegrino, Kersti Kangro, Anna Joelle Greife, Lodovica Panizza, François Steinmetz, Joel Kuusk, Claudia Giardino and Krista Alikas
Appl. Sci. 2025, 15(15), 8357; https://doi.org/10.3390/app15158357 - 27 Jul 2025
Abstract
This study investigates the potential of spaceborne imaging spectroscopy to support the analysis of the status of two major Estonian lakes, Lake Peipsi and Lake Võrtsjärv, using data from the PRISMA and EnMAP missions. The study encompasses nine specific applications across 12 satellite scenes, including the validation of remote sensing reflectance (Rrs), optical water type classification, estimation of phycocyanin concentration, detection of macrophytes, and characterization of reflectance for lake ice/snow coverage. Rrs validation, performed using in situ measurements and Sentinel-2 and Sentinel-3 as references, showed a level of agreement with Spectral Angle < 16°. Hyperspectral imagery successfully captured fine-scale spatial and spectral features not detectable by multispectral sensors; in particular, it was possible to identify cyanobacterial pigments and optical variations driven by seasonal and meteorological dynamics. Through the combined use of satellite and in situ observations, the study can serve as a starting point for the use of hyperspectral data in northern freshwater systems, offering new insights into ecological processes. Given the increasing global concern over freshwater ecosystem health, this work provides a transferable framework for leveraging new-generation hyperspectral missions to enhance water quality monitoring on a global scale.
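The Spectral Angle figure quoted above is the angle between two reflectance spectra treated as vectors. A minimal sketch of that metric, using hypothetical five-band Rrs values (an illustration, not the authors' code):

```python
import numpy as np

def spectral_angle_deg(rrs_a: np.ndarray, rrs_b: np.ndarray) -> float:
    """Spectral Angle between two reflectance spectra, in degrees."""
    cos_sim = np.dot(rrs_a, rrs_b) / (np.linalg.norm(rrs_a) * np.linalg.norm(rrs_b))
    return np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

# Hypothetical satellite-derived and in situ Rrs spectra (sr^-1), five bands
sat = np.array([0.0020, 0.0040, 0.0060, 0.0030, 0.0010])
ref = np.array([0.0022, 0.0041, 0.0055, 0.0031, 0.0012])
print(spectral_angle_deg(sat, ref))  # study's agreement level: < 16 degrees
```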

37 pages, 12672 KiB  
Article
Optimized Design of Cultural Space in Wuhan Metro: Analysis and Reflection Based on Multi-Source Data
by Zhengcong Wei, Yangxue Hu, Yile Chen and Tianjia Wang
Buildings 2025, 15(13), 2201; https://doi.org/10.3390/buildings15132201 - 23 Jun 2025
Abstract
As urbanization has accelerated, rail transit has evolved from a mere means of transportation into a public space that houses the city's cultural memory and serves as a crucial portal for the public to understand urban culture. With its huge passenger flow, metro (subway) cultural space has become a public cultural space that serves communal welfare and represents the image of the city, and it is attracting growing attention from the academic community. Wuhan, located in central China, has many metro lines, and its engineering construction has set several national firsts, making it a typical sample of urban metro development in China. In this study, we use a Python 3.13.0 web crawler to collect public comments on Wuhan metro cultural space from social media, apply SnowNLP sentiment scoring and LDA topic clustering to explore the overall quality, distinct characteristics, and deficiencies of Wuhan metro cultural space construction, and propose targeted design optimization strategies based on the findings. The main findings are as follows: (1) The metro cultural space is an important window through which the public perceives city culture, and overall sentiment is positive: among the 16,316 data samples, 47.7% of comments are positive, 17.8% neutral, and 34.5% negative. (2) Based on content frequency in the sample data, metro station exit and entrance spaces, metro train spaces, and metro concourse and platform spaces rank as weak (18%), medium (33%), and strong (49%) cultural spaces, respectively, in terms of the public's perception of urban culture. (3) At present, Wuhan metro cultural space has certain deficiencies: circulation paths in concourses and platforms are overly dominant, leaving little space for rest or interaction; the cultural symbols of train spaces are fragmented; and the articulation between cultural and functional space at station exits and entrances is weak, with a monotonous spatial form. (4) Wuhan metro cultural space should draw on local landscape expression, functional zoning reorganization, and innovative scene creation to optimize its visual and behavioral symbol systems, establish a good spatial image, and strengthen the public's cultural identity and emotional resonance.
(This article belongs to the Special Issue Digital Management in Architectural Projects and Urban Environment)
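The sentiment-scoring step described in the abstract can be sketched with the SnowNLP library; the comments, cutoffs, and labels below are hypothetical, not drawn from the paper's 16,316-sample corpus:

```python
from snownlp import SnowNLP  # pip install snownlp

# Hypothetical social media comments about metro cultural space
comments = [
    "地铁站的文化墙设计很有武汉特色",      # "The station's cultural wall feels very Wuhan"
    "换乘通道太拥挤，几乎没有休息的空间",  # "Transfer corridors are crowded, almost no rest space"
]

for text in comments:
    score = SnowNLP(text).sentiments  # probability the comment is positive, in [0, 1]
    # Cutoffs of 0.4/0.6 are illustrative; the paper does not state its thresholds here.
    label = "positive" if score > 0.6 else ("negative" if score < 0.4 else "neutral")
    print(f"{score:.3f}  {label}  {text}")
```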

16 pages, 9530 KiB  
Article
Development of Robust Lane-Keeping Algorithm Using Snow Tire Track Recognition in Snowfall Situations
by Donghyun Kim and Yonghwan Jeong
Sensors 2024, 24(23), 7802; https://doi.org/10.3390/s24237802 - 5 Dec 2024
Abstract
This study proposes a robust lane-keeping algorithm designed for snowy road conditions, utilizing a machine-learning-based snow tire track detection model. The proposed algorithm is structured into two primary modules: a snow tire track detector and a lane center estimator. The snow tire track detector uses YOLOv5, trained on custom datasets generated from public videos captured on snowy roads. Video frames are annotated with the Computer Vision Annotation Tool (CVAT) to identify pixels containing snow tire tracks. To mitigate overfitting, the detector is trained on a combined dataset that incorporates both snow tire track images and road scenes from the Udacity dataset. The lane center estimator uses the detected tire tracks to estimate a reference line for lane keeping. Detected tracks are binarized and transformed into a bird's-eye-view image; skeletonization and Hough transformation are then applied to extract tire track lines from the classified pixels. Finally, a Kalman filter estimates the lane center from the tire track lines. Evaluations on unseen images demonstrate that the proposed algorithm provides a reliable lane reference even under heavy snowfall.
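The lane-center pipeline in the abstract (binarize, bird's-eye-view warp, skeletonize, Hough transform) can be sketched as follows; the homography H and the Hough parameters are placeholders, not the authors' values:

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def track_lines_from_mask(track_mask: np.ndarray, H: np.ndarray,
                          bev_size=(400, 600)):
    """track_mask: uint8 snow-tire-track segmentation mask (0 or 255)."""
    bev = cv2.warpPerspective(track_mask, H, bev_size)        # bird's-eye view
    skeleton = skeletonize(bev > 0).astype(np.uint8) * 255    # 1-px centerlines
    lines = cv2.HoughLinesP(skeleton, rho=1, theta=np.pi / 180,
                            threshold=30, minLineLength=40, maxLineGap=20)
    # Each entry of `lines` is [x1, y1, x2, y2]; a Kalman filter would then
    # smooth the lane center derived from these lines across frames.
    return lines
```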

22 pages, 1919 KiB  
Article
An Adaptive Multimodal Fusion 3D Object Detection Algorithm for Unmanned Systems in Adverse Weather
by Shenyu Wang, Xinlun Xie, Mingjiang Li, Maofei Wang, Jinming Yang, Zeming Li, Xuehua Zhou and Zhiguo Zhou
Electronics 2024, 13(23), 4706; https://doi.org/10.3390/electronics13234706 - 28 Nov 2024
Cited by 1
Abstract
Unmanned systems encounter challenging weather conditions during obstacle removal tasks, so researching stable, real-time, and accurate environmental perception methods under such conditions is crucial. Cameras and LiDAR sensors provide different and complementary data. However, integrating such disparate data presents challenges, such as feature mismatches and the fusion of sparse and dense information, which can degrade algorithmic performance. Adverse weather conditions, like rain and snow, introduce noise that further reduces perception accuracy. To address these issues, we propose a novel weather-adaptive bird's-eye-view multi-level co-attention fusion 3D object detection algorithm (BEV-MCAF). The algorithm employs an improved feature extraction network to obtain more effective features. A multimodal feature fusion module is constructed with BEV image feature generation and a co-attention mechanism for better fusion, and a multi-scale multimodal joint domain adversarial network (M2-DANet) is proposed to enhance adaptability to adverse weather. The efficacy of BEV-MCAF has been validated on both the nuScenes and Ithaca365 datasets, confirming its robustness and good generalization capability in a variety of bad weather conditions. The findings indicate that the proposed algorithm outperforms the benchmark, showing improved adaptability to harsh weather and enhancing the robustness of unmanned vehicles, ensuring reliable perception under challenging conditions.
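M2-DANet is a domain-adversarial network; the standard building block of such training is a gradient reversal layer, sketched below in PyTorch as an illustration of the general technique (not the authors' implementation):

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Identity on the forward pass; reversed, scaled gradient on the
        # backward pass, so the feature extractor is pushed toward
        # weather-invariant features while the domain classifier improves.
        return -ctx.lambd * grad_output, None

class DomainClassifier(nn.Module):
    def __init__(self, feat_dim=256, n_domains=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.head = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                  nn.Linear(128, n_domains))

    def forward(self, features):
        return self.head(GradReverse.apply(features, self.lambd))
```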

23 pages, 985 KiB  
Article
DBSF-Net: Infrared Image Colorization Based on the Generative Adversarial Model with Dual-Branch Feature Extraction and Spatial-Frequency-Domain Discrimination
by Shaopeng Li, Decao Ma, Yao Ding, Yong Xian and Tao Zhang
Remote Sens. 2024, 16(20), 3766; https://doi.org/10.3390/rs16203766 - 10 Oct 2024
Abstract
Thermal infrared cameras can image stably in complex scenes such as night, rain, snow, and dense fog. Humans, however, are more sensitive to visual colors, so there is an urgent need to convert infrared images into color images in areas such as assisted driving. This paper studies a colorization method for infrared images based on a generative adversarial model. The proposed dual-branch feature extraction network ensures the stability of the content and structure of the generated visible-light image, and the proposed discrimination strategy combining spatial- and frequency-domain hybrid constraints effectively mitigates undersaturated coloring and the loss of texture details in edge areas of the generated visible-light image. Comparative experiments on public paired infrared–visible datasets show that the proposed algorithm achieves the best performance in maintaining the content-structure consistency of generated images, restoring image color distribution, and restoring image texture details.
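One common way to realize frequency-domain discrimination of this kind is to hand the discriminator a log-amplitude spectrum alongside the spatial image; the sketch below shows that transform only, and is an assumption rather than the paper's exact discriminator design:

```python
import torch

def log_amplitude_spectrum(img: torch.Tensor) -> torch.Tensor:
    """img: (B, C, H, W) tensor. Returns the centered log-amplitude spectrum."""
    spectrum = torch.fft.fft2(img, norm="ortho")
    amplitude = torch.abs(torch.fft.fftshift(spectrum, dim=(-2, -1)))
    return torch.log1p(amplitude)  # compress dynamic range for the discriminator

fake_rgb = torch.rand(4, 3, 256, 256)          # colorized generator output
freq_input = log_amplitude_spectrum(fake_rgb)  # same shape, frequency-domain view
```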

22 pages, 1478 KiB  
Article
Assessing Greenhouse Gas Monitoring Capabilities Using SolAtmos End-to-End Simulator: Application to the Uvsq-Sat NG Mission
by Cannelle Clavier, Mustapha Meftah, Alain Sarkissian, Frédéric Romand, Odile Hembise Fanton d’Andon, Antoine Mangin, Slimane Bekki, Pierre-Richard Dahoo, Patrick Galopeau, Franck Lefèvre, Alain Hauchecorne and Philippe Keckhut
Remote Sens. 2024, 16(8), 1442; https://doi.org/10.3390/rs16081442 - 18 Apr 2024
Cited by 6
Abstract
Monitoring atmospheric concentrations of greenhouse gases (GHGs) like carbon dioxide and methane in near real time and with good spatial resolution is crucial for enhancing our understanding of the sources and sinks of these gases. A novel approach is to use a constellation of small satellites equipped with miniaturized spectrometers with a spectral resolution of a few nanometers. The objective of this study is to describe the expected results from a single such satellite, Uvsq-Sat NG. The SolAtmos end-to-end simulator and its three tools (IRIS, OptiSpectra, and GHGRetrieval) were developed to evaluate the performance of the spectrometer of the Uvsq-Sat NG mission, which focuses on measuring the main GHGs. The IRIS tool provides Top-Of-Atmosphere (TOA) spectral radiances. Four scenes were analyzed (pine forest, deciduous forest, ocean, snow), combined with different aerosol types (continental, desert, maritime, urban). Simulated radiance spectra were calculated over the wavelength range of Uvsq-Sat NG, which spans 1200 to 2000 nm. The OptiSpectra tool determined optimal observational settings for the spectrometer, including Signal-to-Noise Ratio (SNR) and integration time. Data derived from IRIS and OptiSpectra served as input for the GHGRetrieval simulation tool, developed to provide greenhouse gas concentrations. The Levenberg–Marquardt algorithm was applied iteratively to fine-tune gas concentrations and model inputs, aligning observed transmittance functions with simulated ones under given environmental conditions. Gas concentrations (CO2, CH4, O2, H2O) and their uncertainties were estimated using the Monte Carlo method. Based on this analysis, the study demonstrates that a miniaturized spectrometer onboard Uvsq-Sat NG can observe different scenes by adjusting its integration time according to the wavelength, with an expected precision of the order of a few ppm for carbon dioxide and less than 25 ppb for methane.
(This article belongs to the Special Issue Remote Sensing of Greenhouse Gas Emissions II)
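The retrieval loop the abstract describes, Levenberg–Marquardt fitting followed by Monte Carlo uncertainty estimation, can be sketched with SciPy; the toy forward model, absorption features, and noise level below are hypothetical stand-ins for GHGRetrieval:

```python
import numpy as np
from scipy.optimize import least_squares

wl = np.linspace(1200.0, 2000.0, 200)  # Uvsq-Sat NG spectral range, nm

def forward_transmittance(conc, wl):
    """Toy forward model: conc = [CO2 ppm, CH4 ppb], with Gaussian bands."""
    co2_band = np.exp(-((wl - 1600.0) / 40.0) ** 2)
    ch4_band = np.exp(-((wl - 1660.0) / 30.0) ** 2)
    return np.exp(-(conc[0] * 1e-5 * co2_band + conc[1] * 1e-6 * ch4_band))

rng = np.random.default_rng(0)
observed = forward_transmittance([420.0, 1900.0], wl) + rng.normal(0, 1e-3, wl.size)

# Levenberg–Marquardt fit of the gas concentrations
fit = least_squares(lambda c: forward_transmittance(c, wl) - observed,
                    x0=[400.0, 1800.0], method="lm")

# Monte Carlo: refit under fresh noise realizations to estimate uncertainty
samples = [least_squares(lambda c: forward_transmittance(c, wl)
                         - (observed + rng.normal(0, 1e-3, wl.size)),
                         x0=fit.x, method="lm").x
           for _ in range(200)]
mean, std = np.mean(samples, axis=0), np.std(samples, axis=0)
print(f"CO2 = {mean[0]:.1f} ± {std[0]:.1f} ppm, CH4 = {mean[1]:.0f} ± {std[1]:.0f} ppb")
```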

14 pages, 3570 KiB  
Article
A Multi-Stage Progressive Network with Feature Transmission and Fusion for Marine Snow Removal
by Lixin Liu, Yuyang Liao and Bo He
Sensors 2024, 24(2), 356; https://doi.org/10.3390/s24020356 - 7 Jan 2024
Cited by 2
Abstract
Improving underwater image quality is crucial for marine detection applications. However, in the marine environment, captured images are often affected by various degradation factors due to the complexity of underwater conditions. In addition to common color distortions, marine snow noise in underwater images is a significant issue: the backscatter of artificial light on marine snow generates specks in images, degrading image quality and scene perception and subsequently impacting downstream tasks such as target detection and segmentation. To address these issues, we designed a new network structure. A novel skip-connection structure called the dual-channel multi-scale feature transmitter (DCMFT) reduces information loss during downsampling in the feature encoding and decoding sections. Additionally, iterative attentional feature fusion (iAFF) modules are inserted into the feature transfer process at each stage to fully utilize the marine snow features extracted at different stages. Finally, to further optimize the network's performance, we incorporate the multi-scale structural similarity index (MS-SSIM) into the loss function to ensure more effective convergence during training. In experiments on the Marine Snow Removal Benchmark (MSRB) dataset with an augmented sample size, our method achieved significant results, excelling at removing marine snow noise with a peak signal-to-noise ratio of 38.9251 dB and significantly outperforming existing methods.
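Folding MS-SSIM into a restoration loss, as the abstract describes, is commonly done by mixing it with an L1 term; the sketch below assumes the pytorch-msssim package, and the mixing weight is illustrative, not the paper's:

```python
import torch
from torch import nn
from pytorch_msssim import MS_SSIM  # pip install pytorch-msssim

class RestorationLoss(nn.Module):
    def __init__(self, alpha: float = 0.16):
        super().__init__()
        self.l1 = nn.L1Loss()
        self.ms_ssim = MS_SSIM(data_range=1.0, channel=3)  # images scaled to [0, 1]
        self.alpha = alpha  # illustrative mixing weight, an assumption

    def forward(self, restored: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
        # MS-SSIM is a similarity in [0, 1]; use (1 - MS-SSIM) as the loss term.
        return (1 - self.alpha) * self.l1(restored, clean) \
             + self.alpha * (1 - self.ms_ssim(restored, clean))
```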

19 pages, 10172 KiB  
Article
Reconstructing Snow Cover under Clouds and Cloud Shadows by Combining Sentinel-2 and Landsat 8 Images in a Mountainous Region
by Yanli Zhang, Changqing Ye, Ruirui Yang and Kegong Li
Remote Sens. 2024, 16(1), 188; https://doi.org/10.3390/rs16010188 - 2 Jan 2024
Cited by 6
Abstract
Snow cover is a sensitive indicator of global climate change, and optical images are an important means for monitoring its spatiotemporal changes. Due to the high reflectivity, rapid change, and intense spatial heterogeneity of mountainous snow cover, Sentinel-2 (S2) and Landsat 8 (L8) imagery, with both high spatial and spectral resolution, have become major data sources. However, optical sensors are susceptible to cloud cover, and the two satellites' images have significant spectral differences, making it challenging to obtain snow cover beneath clouds and cloud shadows (CCSs). Based on our previously published approach for snow reconstruction on S2 images using the Google Earth Engine (GEE), this study introduces two main innovations: (1) combining S2 and L8 images and choosing different CCS detection methods, and (2) improving the cloud shadow detection algorithm by considering land cover types, thus further improving mountainous snow monitoring. The Babao River Basin of the Qilian Mountains in China is chosen as the study area; 399 S2 scenes and 35 L8 scenes are selected to analyze the spatiotemporal variations of snow cover from September 2019 to August 2022 in GEE. The results indicate that the snow reconstruction accuracies of both image types are relatively high, with overall accuracies of 80.74% for S2 and 88.81% for L8. The time-series analysis of three hydrological years shows a marked difference in the spatial distribution of snow cover between hydrological years within the basin, with fluctuations observed overall.
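The snow test underlying this kind of mapping is the Normalised Difference Snow Index, NDSI = (Green − SWIR1)/(Green + SWIR1), i.e., bands B3/B11 on Sentinel-2 and B3/B6 on Landsat 8. A minimal GEE Python sketch; the 0.4 threshold is a common convention rather than necessarily the study's value, and the point coordinate merely stands in for the basin:

```python
import ee

ee.Initialize()

s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterDate("2019-09-01", "2022-08-31")
      .filterBounds(ee.Geometry.Point(100.3, 38.1))  # approx. Babao River Basin
      .first())

ndsi = s2.normalizedDifference(["B3", "B11"]).rename("NDSI")
snow_mask = ndsi.gt(0.4)  # 1 = snow-covered pixel, 0 = snow-free
```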

19 pages, 2402 KiB  
Article
Controllable Unsupervised Snow Synthesis by Latent Style Space Manipulation
by Hanting Yang, Alexander Carballo, Yuxiao Zhang and Kazuya Takeda
Sensors 2023, 23(20), 8398; https://doi.org/10.3390/s23208398 - 12 Oct 2023
Abstract
In the field of intelligent vehicle technology, there is a high dependence on images captured under challenging conditions to develop robust perception algorithms. However, acquiring these images can be both time-consuming and dangerous. To address this issue, unpaired image-to-image translation models offer a solution by synthesizing samples of the desired domain, thus eliminating the reliance on ground-truth supervision. However, current methods predominantly produce a single projection rather than multiple solutions and offer little control over the direction of generation, leaving scope for enhancement. In this study, we propose a generative adversarial network (GAN)-based model that incorporates both a style encoder and a content encoder, specifically designed to extract relevant information from an image. We employ a decoder to reconstruct an image from these encoded features while ensuring that the generated output remains within a permissible range by applying a self-regression module to constrain the style latent space. By modifying the hyperparameters, we can generate controllable outputs with specific style codes. We evaluate the model by generating snow scenes on the Cityscapes and EuroCity Persons datasets. The results reveal the effectiveness of the proposed methodology, reinforcing the benefits of our approach for the ongoing evolution of intelligent vehicle technology.

19 pages, 9232 KiB  
Article
Robust Visual Recognition in Poor Visibility Conditions: A Prior Knowledge-Guided Adversarial Learning Approach
by Jiangang Yang, Jianfei Yang, Luqing Luo, Yun Wang, Shizheng Wang and Jian Liu
Electronics 2023, 12(17), 3711; https://doi.org/10.3390/electronics12173711 - 2 Sep 2023
Cited by 2 | Correction
Abstract
Deep learning has achieved remarkable success in numerous computer vision tasks. However, recent research reveals that deep neural networks are vulnerable to natural perturbations from poor visibility conditions, limiting their practical applications. While several studies have focused on enhancing model robustness in poor visibility through techniques such as image restoration, data augmentation, and unsupervised domain adaptation, these efforts are predominantly confined to specific scenarios and fail to address the multiple poor visibility scenarios encountered in real-world settings. Furthermore, the valuable prior knowledge inherent in poor visibility images is seldom utilized to aid high-level computer vision tasks. In light of these challenges, we propose a novel deep learning paradigm designed to bolster the robustness of object recognition across diverse poor visibility scenes. Drawing on the prior information in these scenes, we integrate a prior-knowledge-based feature matching module into the proposed learning paradigm to help deep models learn more robust generic features at shallow levels. To further enhance the robustness of deep features, we employ an adversarial learning strategy based on mutual information, which combines with the feature matching module to extract task-specific representations from low-visibility scenes in a more robust manner, thereby enhancing the robustness of object recognition. We evaluate our approach on self-constructed datasets containing diverse poor visibility scenes, including visual blur, fog, rain, snow, and low illuminance. Extensive experiments demonstrate that the proposed method yields significant improvements over existing solutions across various poor visibility conditions.
(This article belongs to the Section Artificial Intelligence)

18 pages, 7683 KiB  
Article
Use of Landsat Satellite Images in the Assessment of the Variability in Ice Cover on Polish Lakes
by Mariusz Sojka, Mariusz Ptak and Senlin Zhu
Remote Sens. 2023, 15(12), 3030; https://doi.org/10.3390/rs15123030 - 9 Jun 2023
Cited by 6
Abstract
Despite several decades of observations of ice cover on Polish lakes, researchers have not broadly applied satellite images to date. This paper presents a temporal and spatial analysis of the variability in the occurrence of ice cover on lakes in the Drawskie Lakeland in the hydrological years 1984–2022, based on satellite data from Landsat missions 4, 5, 7, 8, and 9. The extent of ice cover was determined from the Normalised Difference Snow Index (NDSI) and the blue-band reflectance (ρ_blue), with ρ_blue threshold values from 0.033 to 0.120 used to delineate ice cover. The analysis covered 67 lakes with areas from 0.07 to 18.71 km². A total of 53 images were analysed, of which 14 showed full and 39 partial ice cover. Cluster analysis permitted the designation of two groups of lakes characterised by a similar range of ice cover. The results were analysed in the context of the lakes' morphometric parameters, showing that the range of ice cover on a lake is determined by its surface area; its mean and maximum depth, volume, length, and width; and its elevation above sea level. Analyses of the spatial extent of ice cover in successive scenes allowed the preparation of maps of the probability of ice cover occurrence that fully characterise its variability within each lake. Monitoring the spatial variability in ice cover within individual lakes, as well as on lakes not covered by traditional observations, offers new research possibilities in many scientific disciplines focused on these ecosystems.
(This article belongs to the Special Issue Remote Sensing of Environmental Changes in Cold Regions Ⅱ)
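A minimal sketch of the two-band ice test the abstract describes; the per-scene blue-band threshold comes from the study's 0.033–0.120 range, while the NDSI cutoff below is a common convention rather than the paper's stated value:

```python
import numpy as np

def ice_cover_mask(rho_blue: np.ndarray, ndsi: np.ndarray,
                   blue_threshold: float = 0.033,
                   ndsi_min: float = 0.4) -> np.ndarray:
    """Boolean ice-cover mask from blue-band reflectance and NDSI arrays.

    blue_threshold is chosen per scene within 0.033-0.120 (the study's range);
    ndsi_min is an assumed conventional snow/ice cutoff.
    """
    return (rho_blue >= blue_threshold) & (ndsi >= ndsi_min)
```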

21 pages, 4732 KiB  
Article
Snow Cover Mapping Based on SNPP-VIIRS Day/Night Band: A Case Study in Xinjiang, China
by Baoying Chen, Xianfeng Zhang, Miao Ren, Xiao Chen and Junyi Cheng
Remote Sens. 2023, 15(12), 3004; https://doi.org/10.3390/rs15123004 - 8 Jun 2023
Cited by 3
Abstract
Detailed snow cover maps are essential for estimating the earth's energy balance and hydrological cycle. Mapping snow cover across spatially extensive and topographically complex areas with little or no cloud obscuration is challenging, but SNPP-VIIRS Day/Night Band (DNB) nighttime light data offer a potential solution. This paper maps snow cover distribution at 750 m resolution across the diverse 1,664,900 km² of Xinjiang, China, based on SNPP-VIIRS DNB radiance. We implemented the Krill Herd algorithm, a swarm-intelligence optimization technique that finds the optimal threshold value using Otsu's method as the objective function. We derived SNPP-VIIRS DNB snow maps for 14 consecutive scenes in December 2021, compared our snow-covered area estimates with those from MODIS and AMSR2 standard snow cover products, and generated composite snow maps by merging MODIS and SNPP-VIIRS DNB data. Results show that SNPP-VIIRS DNB snow maps provide reliable snow cover information superior to MODIS and AMSR2, with an overall accuracy of 84.66%. The composite snow maps at 500 m spatial resolution provided 55.85% more information on snow cover distribution than standard MODIS products and achieved an overall accuracy of 84.69%. Our study demonstrates the feasibility of snow cover detection in Xinjiang based on SNPP-VIIRS DNB, which can serve as a supplementary dataset where clouded pixels are present in MODIS estimations.
(This article belongs to the Special Issue Remote Sensing of Night-Time Light II)
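Otsu's method supplies the objective that the Krill Herd metaheuristic maximizes; the sketch below implements that between-class-variance objective and, for illustration, replaces the metaheuristic with a plain grid search over toy radiances:

```python
import numpy as np

def between_class_variance(values: np.ndarray, t: float) -> float:
    """Otsu's criterion for a candidate threshold t over a 1-D sample."""
    lo, hi = values[values < t], values[values >= t]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / values.size, hi.size / values.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

# Toy stand-in for DNB radiances over a snow/no-snow scene
dnb = np.random.default_rng(0).lognormal(mean=0.0, sigma=1.0, size=10_000)
candidates = np.linspace(dnb.min(), dnb.max(), 256)
best_t = max(candidates, key=lambda t: between_class_variance(dnb, t))
```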

20 pages, 27650 KiB  
Article
STC-YOLO: Small Object Detection Network for Traffic Signs in Complex Environments
by Huaqing Lai, Liangyan Chen, Weihua Liu, Zi Yan and Sheng Ye
Sensors 2023, 23(11), 5307; https://doi.org/10.3390/s23115307 - 3 Jun 2023
Cited by 50
Abstract
The detection of traffic signs is easily affected by changes in weather, partial occlusion, and light intensity, which increases the number of potential safety hazards in practical applications of autonomous driving. To address this issue, a new traffic sign dataset, the enhanced Tsinghua-Tencent 100K (TT100K) dataset, was constructed; it includes a large number of difficult samples generated using various data augmentation strategies such as fog, snow, noise, occlusion, and blur. Meanwhile, a small-object traffic sign detection network for complex environments, based on the YOLOv5 framework (STC-YOLO), was constructed. In this network, the down-sampling multiple was adjusted and a small object detection layer was adopted to obtain and transmit richer and more discriminative small object features. A feature extraction module combining a convolutional neural network (CNN) with multi-head attention was then designed to overcome the limitations of ordinary convolution and obtain a larger receptive field. Finally, the normalized Gaussian Wasserstein distance (NWD) metric was introduced to compensate for the sensitivity of the intersection-over-union (IoU) loss to location deviations of tiny objects in the regression loss function, and more accurate anchor box sizes for small objects were obtained with the K-means++ clustering algorithm. Experiments on 45 sign classes in the enhanced TT100K dataset showed that STC-YOLO outperformed YOLOv5 by 9.3% in mean average precision (mAP), and its performance was comparable with the state of the art on the public TT100K dataset and the CSUST Chinese Traffic Sign Detection Benchmark (CCTSDB2021) dataset.
(This article belongs to the Section Vehicular Sensing)
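The NWD metric mentioned above is commonly formulated by modeling each box (cx, cy, w, h) as a 2-D Gaussian and exponentiating a normalized Wasserstein distance between the two Gaussians; a sketch under that formulation, with an illustrative normalizing constant C:

```python
import math

def nwd(box_a, box_b, C: float = 12.8) -> float:
    """Normalized Gaussian Wasserstein distance between two boxes (cx, cy, w, h).
    C is a dataset-dependent constant; the value here is illustrative."""
    cxa, cya, wa, ha = box_a
    cxb, cyb, wb, hb = box_b
    # Squared 2-Wasserstein distance between the two box Gaussians.
    w2_sq = ((cxa - cxb) ** 2 + (cya - cyb) ** 2
             + ((wa - wb) / 2) ** 2 + ((ha - hb) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / C)

# Two tiny boxes with a small location error still get a smooth, informative
# score, unlike IoU, which collapses toward 0 for such deviations.
print(nwd((10, 10, 8, 8), (13, 10, 8, 8)))
```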

20 pages, 40396 KiB  
Article
Convolutional Neural Network-Driven Improvements in Global Cloud Detection for Landsat 8 and Transfer Learning on Sentinel-2 Imagery
by Shulin Pang, Lin Sun, Yanan Tian, Yutiao Ma and Jing Wei
Remote Sens. 2023, 15(6), 1706; https://doi.org/10.3390/rs15061706 - 22 Mar 2023
Cited by 15
Abstract
A stable and reliable cloud detection algorithm is an important step in optical satellite data preprocessing. Existing threshold methods are mostly based on classifying the spectral features of isolated individual pixels and do not incorporate spatial information, which often leads to misclassification of bright surfaces such as human-made structures or snow/ice. Multi-temporal methods can alleviate this problem, but cloud-free images of a scene are difficult to obtain. To deal with this issue, we extended four deep-learning Convolutional Neural Network (CNN) models to improve global cloud detection accuracy for Landsat imagery. The inputs are simplified to all discrete spectral channels from visible to shortwave-infrared wavelengths via radiometric calibration, and the United States Geological Survey (USGS) global Landsat 8 Biome cloud-cover assessment dataset is randomly divided into independent training and validation sets. Experiments demonstrate that the cloud mask of the extended U-Net model (UNmask) yields the best performance among all the models in estimating cloud amounts (cloud amount difference, CAD = −0.35%) and capturing cloud distributions (overall accuracy = 94.9%) for Landsat 8 imagery compared with the reference validation masks; in particular, it runs fast, taking only about 41 ± 5.5 s per scene. The model can also detect broken and thin clouds over both dark and bright surfaces (e.g., urban and barren). Finally, the UNmask model trained on Landsat 8 imagery was successfully applied to cloud detection on Sentinel-2 imagery (overall accuracy = 90.1%) via transfer learning. These results demonstrate the great potential of our model in future applications such as remote sensing satellite data preprocessing.
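The two headline numbers, CAD and overall accuracy, can be computed from boolean cloud masks; the CAD definition below (predicted minus reference cloud fraction) is an assumption consistent with the abstract's wording:

```python
import numpy as np

def cloud_metrics(pred_mask: np.ndarray, ref_mask: np.ndarray):
    """pred_mask, ref_mask: boolean arrays, True = cloudy pixel."""
    cad = pred_mask.mean() - ref_mask.mean()           # signed cloud-fraction difference
    overall_accuracy = (pred_mask == ref_mask).mean()  # fraction of agreeing pixels
    return 100 * cad, 100 * overall_accuracy           # both as percentages
```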

16 pages, 2731 KiB  
Article
Framework for Generation and Removal of Multiple Types of Adverse Weather from Driving Scene Images
by Hanting Yang, Alexander Carballo, Yuxiao Zhang and Kazuya Takeda
Sensors 2023, 23(3), 1548; https://doi.org/10.3390/s23031548 - 31 Jan 2023
Cited by 6
Abstract
Weather variation in the distribution of image data can cause a decline in the performance of existing visual algorithms during evaluation. Two promising solutions are adding samples of the target domain to the training data, or using pre-trained image restoration methods, such as de-hazing, de-raining, and de-snowing, to improve the quality of input images. In this work, we propose Multiple Weather Translation GAN (MWTG), a CycleGAN-based, dual-purpose framework that simultaneously learns weather generation and weather removal from image data. MWTG consists of four GANs constrained by cycle consistency that carry out domain translation tasks between hazy, rainy, snowy, and clear weather using an asymmetric approach. To increase network capacity, we employ a spatial feature transform (SFT) layer to fuse features extracted from the weather layer, which contains high-level domain information from the previous generators. Further, we collect an unpaired, real-world driving dataset recorded under various weather conditions, called Realistic Driving Scenes under Bad Weather (RDSBW). We qualitatively and quantitatively evaluate MWTG using RDSBW and variants of Cityscapes that synthesize weather effects, e.g., Foggy Cityscapes. Our experimental results suggest that MWTG can generate realistic weather in clear images and accurately remove noise from weather images. Furthermore, the SOTA pedestrian detector ASCP achieves an impressive gain in detection precision after image restoration with the proposed MWTG.
(This article belongs to the Topic Intelligent Transportation Systems)
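At the core of a CycleGAN-style framework such as MWTG is the cycle-consistency constraint: translating clear → snowy → clear should reproduce the input. A sketch with hypothetical generators G_cs (clear-to-snow) and G_sc (snow-to-clear); the weight is the conventional CycleGAN setting, not necessarily the paper's:

```python
import torch
from torch import nn

l1 = nn.L1Loss()

def cycle_loss(G_cs: nn.Module, G_sc: nn.Module,
               clear: torch.Tensor, snowy: torch.Tensor,
               weight: float = 10.0) -> torch.Tensor:
    forward_cycle = G_sc(G_cs(clear))   # clear -> snowy -> clear
    backward_cycle = G_cs(G_sc(snowy))  # snowy -> clear -> snowy
    return weight * (l1(forward_cycle, clear) + l1(backward_cycle, snowy))
```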