Search Results (101)

Search Parameters:
Keywords = image shadow removal

16 pages, 5703 KiB  
Article
Document Image Shadow Removal Based on Illumination Correction Method
by Depeng Gao, Wenjie Liu, Shuxi Chen, Jianlin Qiu, Xiangxiang Mei and Bingshu Wang
Algorithms 2025, 18(8), 468; https://doi.org/10.3390/a18080468 - 26 Jul 2025
Viewed by 70
Abstract
Due to diverse lighting conditions and photo environments, shadows are almost ubiquitous in images, especially document images captured with mobile devices. Shadows not only seriously affect the visual quality and readability of a document but also significantly hinder image processing. Although shadow removal research has achieved good results in natural scenes, specific studies on document images are lacking. To effectively remove shadows from document images, a dark illumination correction network is proposed, consisting mainly of two modules: shadow detection and illumination correction. First, a simplified shadow-corrected attention block is designed that combines spatial and channel attention; it is used to extract features, detect the shadow mask, and correct the illumination. Then, the shadow detection block estimates shadow intensity and outputs a soft shadow mask giving the probability that each pixel belongs to a shadow. Lastly, the illumination correction block corrects dark illumination using the soft shadow mask and outputs a shadow-free document image. Experiments on five datasets show that the proposed method achieves state-of-the-art results, demonstrating the effectiveness of illumination correction.
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
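The correction step lends itself to a compact sketch. Below is a minimal, hypothetical illustration of soft-mask illumination correction, assuming a detector has already produced a per-pixel shadow probability `soft_mask` and a correction `gain`; it is not the authors' network, only the arithmetic their illumination correction block implies.

```python
# Minimal sketch of soft-mask illumination correction (hypothetical, not the
# authors' exact network): a detector yields a soft shadow mask m in [0, 1],
# and the corrector scales up the dark (shadowed) illumination accordingly.
import numpy as np

def correct_illumination(img, soft_mask, gain):
    """img: HxWx3 float array in [0, 1]; soft_mask: HxW shadow probability;
    gain: scalar (or HxWx3) illumination correction factor for shadowed pixels."""
    m = soft_mask[..., None]                     # broadcast mask over channels
    corrected = img * (1.0 + m * (gain - 1.0))   # full gain where m=1, none where m=0
    return np.clip(corrected, 0.0, 1.0)

# Toy usage: brighten a synthetic half-shadowed page by a factor of 2.
img = np.full((4, 4, 3), 0.8); img[:, :2] *= 0.5   # left half in shadow
mask = np.zeros((4, 4)); mask[:, :2] = 1.0
print(correct_illumination(img, mask, 2.0)[0, 0])   # ~[0.8, 0.8, 0.8]
```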

13 pages, 5336 KiB  
Article
SnowMamba: Achieving More Precise Snow Removal with Mamba
by Guoqiang Wang, Yanyun Zhou, Fei Shi and Zhenhong Jia
Appl. Sci. 2025, 15(10), 5404; https://doi.org/10.3390/app15105404 - 12 May 2025
Viewed by 410
Abstract
Due to the diversity and semi-transparency of snowflakes, accurately locating and reconstructing background information during image restoration poses a significant challenge. Snowflakes obscure image details, thereby affecting downstream tasks such as object recognition and image segmentation. Although Convolutional Neural Networks (CNNs) and Transformers have achieved promising results in snow removal through local or global feature processing, residual snowflakes or shadows persist in restored images. Inspired by the recent popularity of State Space Models (SSMs), this paper proposes a Mamba-based multi-scale desnowing network (SnowMamba), which effectively models the long-range dependencies of snowflakes. This enables the precise localization and removal of snow particles, addressing the issue of residual snowflakes and shadows in images. Specifically, we design a four-stage encoder–decoder network that incorporates Snow Caption Mamba (SCM) and SE modules to extract comprehensive snowflake and background information. The extracted multi-scale snow and background features are then fed into the proposed Multi-Scale Residual Interaction Network (MRNet) to learn and reconstruct clear, snow-free background images. Extensive experiments demonstrate that the proposed method outperforms other mainstream desnowing approaches in both qualitative and quantitative evaluations on three standard image desnowing datasets.
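For readers unfamiliar with state space models, the sketch below shows the linear recurrence that Mamba-style blocks scan over a sequence; it is a generic SSM illustration (diagonal state matrix, NumPy), not SnowMamba's SCM or SE modules.

```python
# Minimal sketch of the linear state-space recurrence underlying Mamba-style
# blocks: h_t = A_bar * h_{t-1} + B_bar @ x_t,  y_t = C @ h_t, scanned along
# the sequence. Illustrative only; shapes and values are toy assumptions.
import numpy as np

def ssm_scan(x, A_bar, B_bar, C):
    """x: (T, d_in) input sequence; A_bar: (d_state,) diagonal state matrix;
    B_bar: (d_state, d_in); C: (d_out, d_state). Returns (T, d_out)."""
    h = np.zeros(A_bar.shape[0])
    ys = []
    for x_t in x:                       # sequential scan over "pixels as tokens"
        h = A_bar * h + B_bar @ x_t     # state update (diagonal A for simplicity)
        ys.append(C @ h)                # readout
    return np.stack(ys)

rng = np.random.default_rng(0)
y = ssm_scan(rng.normal(size=(16, 8)),          # 16 tokens, 8 features
             np.full(4, 0.9),                   # slowly decaying state
             rng.normal(size=(4, 8)) * 0.1,
             rng.normal(size=(2, 4)))
print(y.shape)  # (16, 2)
```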

19 pages, 19562 KiB  
Article
Inversion of Soil Moisture Content in Silage Corn Root Zones Based on UAV Remote Sensing
by Qihong Da, Jixuan Yan, Guang Li, Zichen Guo, Haolin Li, Wenning Wang, Jie Li, Weiwei Ma, Xuchun Li and Kejing Cheng
Agriculture 2025, 15(3), 331; https://doi.org/10.3390/agriculture15030331 - 2 Feb 2025
Cited by 2 | Viewed by 1337
Abstract
Accurately monitoring soil moisture content (SMC) in the field is crucial for achieving precision irrigation management. Currently, the development of UAV platforms provides a cost-effective method for large-scale SMC monitoring. This study investigates silage corn by employing UAV remote sensing technology to obtain multispectral imagery during the seedling, jointing, and tasseling stages. Field experimental data were integrated, and supervised classification was used to remove the soil background and image shadows. Canopy reflectance was extracted using masking techniques, while Pearson correlation analysis was conducted to assess the strength of the linear relationship between spectral indices and SMC. Subsequently, convolutional neural network (CNN), back-propagation neural network (BPNN), and partial least squares regression (PLSR) models were constructed to evaluate their applicability for monitoring SMC before and after removing the soil background and image shadows. The results indicated that (1) after removing the soil background and image shadows, the SMC inversion accuracy of the CNN, BPNN, and PLSR models improved at all growth stages; (2) among the inversion models, accuracy ranked from highest to lowest as CNN, PLSR, and BPNN; and (3) across growth stages, inversion accuracy ranked from highest to lowest as the seedling, tasseling, and jointing stages. The findings provide theoretical and technical support for UAV multispectral remote sensing inversion of SMC in silage corn root zones and offer validation for large-scale soil moisture monitoring using remote sensing.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
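The screening step described above can be sketched briefly. The example below, with an assumed NDVI index and toy per-plot values, shows masking out non-canopy pixels and Pearson-correlating an index with measured SMC; band names and numbers are illustrative.

```python
# Hedged sketch: mask soil/shadow pixels, compute a vegetation index over the
# remaining canopy, then Pearson-correlate per-plot indices with measured SMC.
import numpy as np
from scipy.stats import pearsonr

def plot_mean_index(red, nir, canopy_mask):
    """Mean NDVI over canopy pixels only (soil/shadow already classified out)."""
    ndvi = (nir - red) / (nir + red + 1e-8)
    return float(ndvi[canopy_mask].mean())

rng = np.random.default_rng(0)
red, nir = rng.uniform(0.05, 0.5, (2, 32, 32))
canopy = nir > red                                  # stand-in for the classifier
print(f"plot mean NDVI: {plot_mean_index(red, nir, canopy):.3f}")

# One index value per plot against field-measured SMC (%), toy numbers.
ndvi_plots = np.array([0.61, 0.55, 0.70, 0.48, 0.66])
smc_plots = np.array([21.0, 18.5, 24.2, 16.9, 22.8])
r, p = pearsonr(ndvi_plots, smc_plots)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```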

17 pages, 7746 KiB  
Article
Measurement of Seed Cotton Color Using RGB Imaging and Color-Unet
by Hao Li, Qingxu Li, Wanhuai Zhou, Ruoyu Zhang, Shicheng Hong, Mengyun Zhang and Zhiqiang Zhai
Agronomy 2025, 15(1), 19; https://doi.org/10.3390/agronomy15010019 - 26 Dec 2024
Viewed by 904
Abstract
Color is a key indicator in evaluating seed cotton quality. Accurate and rapid detection of seed cotton color is essential for its storage, processing, and trade. In this study, an RGB imaging and semantic segmentation-based method was proposed for seed cotton color detection. First, a color detection system utilizing machine vision technology was developed to capture seed cotton images. Next, a Color-Unet model, incorporating convolutional block attention and improved inception E modules based on Unet, was applied to effectively remove impurities and shadows from the images, resolving the over-segmentation issue commonly encountered in traditional threshold segmentation. The results demonstrated that the pixel accuracy of segmentation reached 97.20%, the mean intersection over union was 91.81%, and the average segmentation speed was 322.3 ms per image; the Color-Unet model effectively addressed the over-segmentation problem. Subsequently, seed cotton color indexes were calculated from the segmented images using Hunter color formulas. To evaluate the accuracy of the proposed color measurement, a regression analysis was performed comparing its results with those from the HX-410 instrument. The coefficient of determination for yellowness was 0.883, with a root mean square error of 0.150 and a mean relative error of 2.61%. The coefficient of determination for reflectance degree was 0.832, with a root mean square error of 1.56% and a mean relative error of 1.84%. The proposed method allows for the rapid and accurate assessment of seed cotton color from RGB images, providing a valuable technical reference for seed cotton color evaluation.
(This article belongs to the Section Precision and Digital Agriculture)
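As a rough illustration of the color-index step, the sketch below converts an RGB value to Hunter Lab via XYZ; the whitepoint (D65) and scaling constants are assumptions, not necessarily those used with the HX-410, and taking tristimulus Y as the reflectance degree Rd is a simplification.

```python
# Hedged sketch: mean RGB of the segmented cotton region -> XYZ -> Hunter Lab.
# +b (yellowness) and Rd (here taken as tristimulus Y) are the color indexes.
import numpy as np

M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

def srgb_to_linear(c):
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def hunter_lab(rgb, white=(95.047, 100.0, 108.883), ka=172.3, kb=67.0):
    """Hunter Lab for a D65 whitepoint; ka/kb are the usual D65 constants."""
    X, Y, Z = 100.0 * (M_SRGB_TO_XYZ @ srgb_to_linear(rgb))
    Xn, Yn, Zn = white
    s = np.sqrt(Y / Yn)
    L = 100.0 * s
    a = ka * (X / Xn - Y / Yn) / s
    b = kb * (Y / Yn - Z / Zn) / s
    return L, a, b, Y   # Y doubles as a simple reflectance proxy (Rd)

L, a, b, Rd = hunter_lab([0.85, 0.80, 0.62])   # slightly yellowish white
print(f"L={L:.1f}  +b={b:.1f}  Rd={Rd:.1f}")   # positive b = yellowness
```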

31 pages, 3303 KiB  
Systematic Review
Deep Learning-Based Cloud Detection for Optical Remote Sensing Images: A Survey
by Zhengxin Wang, Longlong Zhao, Jintao Meng, Yu Han, Xiaoli Li, Ruixia Jiang, Jinsong Chen and Hongzhong Li
Remote Sens. 2024, 16(23), 4583; https://doi.org/10.3390/rs16234583 - 6 Dec 2024
Cited by 5 | Viewed by 3939 | Correction
Abstract
In optical remote sensing images, the presence of clouds affects the completeness of the ground observation and further affects the accuracy and efficiency of remote sensing applications. Especially in quantitative analysis, the impact of cloud cover on the reliability of analysis results cannot be ignored. Therefore, high-precision cloud detection is an important step in the preprocessing of optical remote sensing images. In the past decade, with the continuous progress of artificial intelligence, algorithms based on deep learning have become one of the main methods for cloud detection. The rapid development of deep learning technology, especially the introduction of self-attention Transformer models, has greatly improved the accuracy of cloud detection tasks while achieving efficient processing of large-scale remote sensing images. This review provides a comprehensive overview of cloud detection algorithms based on deep learning from the perspective of semantic segmentation, and elaborates on the research progress, advantages, and limitations of different categories in this field. In addition, this paper introduces the publicly available datasets and accuracy evaluation indicators for cloud detection, compares the accuracy of mainstream deep learning models in cloud detection, and briefly summarizes the subsequent processing steps of cloud shadow detection and removal. Finally, this paper analyzes the current challenges faced by existing deep learning-based cloud detection algorithms and the future development direction of the field.
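One of the evaluation indicators mentioned above can be sketched in a few lines. The toy example below computes intersection over union (IoU) and overall accuracy for a binary cloud mask; it is generic, not tied to any specific model in the survey.

```python
# Small sketch of standard accuracy indicators for cloud detection:
# intersection over union (IoU) and overall accuracy on a binary cloud mask.
import numpy as np

def cloud_iou(pred, truth):
    """pred, truth: boolean HxW arrays, True = cloud."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(f"IoU = {cloud_iou(pred, truth):.2f}")        # 2/4 = 0.50
print(f"OA  = {(pred == truth).mean():.2f}")        # 4/6 ~= 0.67
```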

16 pages, 12399 KiB  
Article
Shadow Removal for Enhanced Nighttime Driving Scene Generation
by Heejun Yang, Oh-Hyeon Choung and Yuseok Ban
Appl. Sci. 2024, 14(23), 10999; https://doi.org/10.3390/app142310999 - 26 Nov 2024
Viewed by 1016
Abstract
Autonomous vehicles depend on robust vision systems capable of performing under diverse lighting conditions, yet existing models often exhibit substantial performance degradation when applied to nighttime scenarios after being trained exclusively on daytime data. This discrepancy arises from the lack of fine-grained details that characterize nighttime environments, such as shadows and varying light intensities. To address this gap, we introduce a targeted approach to shadow removal designed for driving scenes. By applying Partitioned Shadow Removal, an enhanced technique that refines shadow-affected areas, alongside image-to-image translation, we generate realistic nighttime scenes from daytime data. Experimental results indicate that our augmented nighttime scenes significantly enhance segmentation accuracy in shadow-impacted regions, thereby increasing model robustness under low-light conditions. Our findings highlight the value of Partitioned Shadow Removal as a practical data augmentation tool, adapted to address the unique challenges of applying shadow removal in driving scenes, thereby paving the way for improved nighttime performance in autonomous vehicle vision systems.
(This article belongs to the Special Issue Application of AI Technology in Intelligent Vehicles and Driving)
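The partitioning idea can be sketched abstractly. The snippet below tiles a frame and applies a per-tile shadow-removal callable; the grid and the stand-in `remove_shadow_fn` are hypothetical, not the paper's Partitioned Shadow Removal implementation.

```python
# Hedged sketch of partitioned processing: split the frame into tiles, run a
# shadow-removal model per tile, and stitch results so each partition adapts
# to local illumination. The tiling scheme and callable are stand-ins.
import numpy as np

def partitioned_shadow_removal(img, remove_shadow_fn, grid=(2, 4)):
    """img: HxWx3 array; remove_shadow_fn: callable on a tile; grid: (rows, cols)."""
    h, w = img.shape[:2]
    out = img.copy()
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys = slice(i * h // grid[0], (i + 1) * h // grid[0])
            xs = slice(j * w // grid[1], (j + 1) * w // grid[1])
            out[ys, xs] = remove_shadow_fn(img[ys, xs])   # per-tile correction
    return out

# Toy usage with a stand-in "model" that just rescales each tile's brightness.
toy = np.random.rand(128, 256, 3).astype(np.float32)
result = partitioned_shadow_removal(toy, lambda t: np.clip(t / t.mean(), 0, 1))
print(result.shape)  # (128, 256, 3)
```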

30 pages, 8057 KiB  
Article
Multi-Temporal Pixel-Based Compositing for Cloud Removal Based on Cloud Masks Developed Using Classification Techniques
by Tesfaye Adugna, Wenbo Xu, Jinlong Fan, Xin Luo and Haitao Jia
Remote Sens. 2024, 16(19), 3665; https://doi.org/10.3390/rs16193665 - 1 Oct 2024
Cited by 1 | Viewed by 2199
Abstract
Cloud cover is a serious problem that affects the quality of remote-sensing (RS) images. Existing cloud removal techniques suffer from notable limitations, such as being specific to certain data types, cloud conditions, and spatial extents, as well as requiring auxiliary data, which hampers their generalizability and flexibility. To address the issue, we propose a maximum-value compositing approach based on generated cloud masks. We acquired 432 daily MOD09GA L2 MODIS images covering a vast region with persistent cloud cover and various climates and land-cover types. Labeled datasets for cloud, land, and no-data were collected from selected daily images. Subsequently, we trained and evaluated RF, SVM, and U-Net models to choose the best models. Accordingly, SVM and U-Net were chosen and employed to classify all the daily images. The classified images were then converted to two sets of mask layers used to mask clouds and no-data pixels in the corresponding daily images by setting the masked pixels' values to −0.999999. After masking, we employed the maximum-value technique to generate two sets of 16-day composite products, MaxComp-1 and MaxComp-2, corresponding to the SVM- and U-Net-derived cloud masks, respectively. Finally, we assessed the quality of our composite products by comparing them with the reference MOD13A1 16-day composite product. Based on land-cover classification accuracy, our products yielded a significantly higher accuracy (5–28%) than the reference MODIS product across three classifiers (RF, SVM, and U-Net), indicating the quality of our products and the effectiveness of our techniques. In particular, MaxComp-1 yielded the best results, which further implies the superiority of SVM for cloud masking. In addition, our products appear to be more radiometrically and spectrally consistent and less noisy than MOD13A1, implying that our approach is more efficient at removing shadows and noise/artifacts. Our method yields high-quality products that are vital for investigating large regions with persistent clouds and for studies requiring time-series data. Moreover, the proposed techniques can be adopted for higher-resolution RS imagery, regardless of the spatial extent, data volume, and type of clouds.
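The compositing step is concrete enough to sketch. The example below applies the paper's fill value of −0.999999 to masked pixels and takes the per-pixel maximum over a 16-day stack; the array layout and toy data are assumptions.

```python
# Hedged sketch of maximum-value compositing: masked (cloud/no-data) pixels
# are set to -0.999999 as in the paper, then a per-pixel maximum over each
# 16-day stack keeps the best clear observation.
import numpy as np

def max_value_composite(daily_stack, cloud_masks, fill=-0.999999):
    """daily_stack: (T, H, W) values for one 16-day window;
    cloud_masks: (T, H, W) boolean, True where cloud or no-data."""
    masked = np.where(cloud_masks, fill, daily_stack)
    return masked.max(axis=0)           # per-pixel maximum across the window

rng = np.random.default_rng(1)
stack = rng.uniform(0.0, 0.9, size=(16, 4, 4))
masks = rng.random((16, 4, 4)) < 0.6    # 60% of observations cloudy
composite = max_value_composite(stack, masks)
print(composite.shape, round(float(composite.max()), 3))
```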

18 pages, 9929 KiB  
Article
Inversion of Cotton Soil and Plant Analytical Development Based on Unmanned Aerial Vehicle Multispectral Imagery and Mixed Pixel Decomposition
by Bingquan Tian, Hailin Yu, Shuailing Zhang, Xiaoli Wang, Lei Yang, Jingqian Li, Wenhao Cui, Zesheng Wang, Liqun Lu, Yubin Lan and Jing Zhao
Agriculture 2024, 14(9), 1452; https://doi.org/10.3390/agriculture14091452 - 25 Aug 2024
Cited by 6 | Viewed by 1586
Abstract
In order to improve the accuracy of multispectral image inversion of soil and plant analytical development (SPAD) of the cotton canopy, image segmentation methods were utilized to remove background interference, such as soil and shadow, from UAV multispectral images. UAV multispectral images of cotton bud-stage canopies were acquired at three different heights (30 m, 50 m, and 80 m). Four methods, namely vegetation index thresholding (VIT), supervised classification by support vector machine (SVM), spectral mixture analysis (SMA), and multiple endmember spectral mixture analysis (MESMA), were used to segment cotton, soil, and shadows in the multispectral images. The segmented UAV multispectral images were used to extract the spectral information of the cotton canopy, and eight vegetation indices were calculated to construct the dataset. Partial least squares regression (PLSR), random forest (RF), and support vector regression (SVR) algorithms were used to construct the inversion model of cotton SPAD. This study analyzed the effects of the different image segmentation methods on the accuracy of spectral information extraction and of SPAD modeling in the cotton canopy. The results showed that (1) the accuracy of spectral information extraction can be improved by removing background interference such as soil and shadows using the four image segmentation methods, with the correlation between the vegetation indices calculated from MESMA-segmented images and cotton canopy SPAD improving the most; (2) at the three flight altitudes, with the vegetation indices calculated by the MESMA segmentation method as input variables, the SVR model had the best accuracy in the inversion of cotton SPAD, with R2 of 0.810, 0.778, and 0.697, respectively; and (3) at a flight altitude of 80 m, the R2 of the SVR models constructed using vegetation indices calculated from images segmented by the VIT, SVM, SMA, and MESMA methods improved by 2.2%, 5.8%, 13.7%, and 17.9%, respectively, compared to the original images. Therefore, the MESMA mixed pixel decomposition method can effectively remove soil and shadows from multispectral images and, in particular, provides a reference for improving the inversion accuracy of crop physiological parameters in low-resolution images with more mixed pixels.
(This article belongs to the Special Issue Application of UAVs in Precision Agriculture—2nd Edition)
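The inversion step maps naturally to a small regression sketch. The example below fits an SVR model from vegetation-index predictors to SPAD, in the spirit of the paper's best-performing model; the scikit-learn pipeline, hyperparameters, and synthetic data are illustrative assumptions.

```python
# Hedged sketch of SPAD inversion: vegetation indices from segmented images
# as predictors, measured canopy SPAD as the target, fitted with SVR.
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = rng.uniform(0.2, 0.9, size=(120, 8))             # 8 vegetation indices/plot
spad = 30 + 25 * X[:, 0] + rng.normal(0, 1.5, 120)   # synthetic SPAD response

X_tr, X_te, y_tr, y_te = train_test_split(X, spad, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)
print(f"R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```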

17 pages, 14587 KiB  
Article
Increasing the Beam Width and Intensity with Refraction Power Effect Using a Combination of Beam Mirrors and Concave Mirrors for Surgical-Fluorescence-Emission-Guided Cancer Monitoring Method
by Jina Park, Jeongmin Seo, Kicheol Yoon, Sangyun Lee, Minchan Kim, Seung Yeob Ryu and Kwang Gi Kim
Sensors 2024, 24(17), 5503; https://doi.org/10.3390/s24175503 - 25 Aug 2024
Viewed by 1371
Abstract
The primary goal during cancer removal surgery is to completely excise the malignant tumor. Because the color of the tumor and the surrounding tissue is very similar, the tumor is difficult to observe with the naked eye, posing a risk of damaging surrounding blood vessels during the removal process. Therefore, fluorescence emission is induced using a fluorescent contrast agent, and color classification is monitored through camera imaging. LED light must be irradiated onto the lesion to generate the fluorescence emission. However, the power and beam width of the LED are insufficient to generate this emission effectively, so the beam width and intensity must be increased to irradiate the entire lesion, and there should be no shaded areas within the beam irradiation range. This paper proposes a method to enhance the beam width and intensity while eliminating shadow areas. A total-reflection beam mirror was used to increase beam width and intensity. However, when the beam width increased, a shadow area appeared at the edge, limiting irradiation of the entire lesion. To compensate for this shadow area, a concave lens was combined with the beam mirror, increasing the beam width and intensity by more than 1.42 times and 18.6 times, respectively. Consequently, the beam width reached 111.8°, and the beam power was 13.6 mW. The proposed method is expected to be useful for observing tumors through induced fluorescence emission during cancer removal surgery or for pathological examination in the pathology department.
(This article belongs to the Section Biosensors)
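As a quick sanity check on the reported figures (assuming the final values equal the baseline multiplied by the stated factors), the implied LED-only beam width and power can be backed out:

```python
# Back-of-the-envelope check of the reported gains, assuming
# final value = baseline * stated factor (an interpretation, not stated).
print(f"implied baseline width: {111.8 / 1.42:.1f} deg")   # ~78.7 degrees
print(f"implied baseline power: {13.6 / 18.6:.2f} mW")     # ~0.73 mW
```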

21 pages, 20756 KiB  
Article
A Novel Method for Cloud and Cloud Shadow Detection Based on the Maximum and Minimum Values of Sentinel-2 Time Series Images
by Kewen Liang, Gang Yang, Yangyan Zuo, Jiahui Chen, Weiwei Sun, Xiangchao Meng and Binjie Chen
Remote Sens. 2024, 16(8), 1392; https://doi.org/10.3390/rs16081392 - 15 Apr 2024
Cited by 8 | Viewed by 4117
Abstract
Automatic and accurate detection of clouds and cloud shadows is a critical aspect of optical remote sensing image preprocessing. This paper proposes a time series maximum and minimum mask method (TSMM) for cloud and cloud shadow detection. Firstly, Cloud Score+S2_HARMONIZED (CS+S2) is employed as a preliminary mask for clouds and cloud shadows. Secondly, we calculate the ratio of the maximum and sub-maximum values of the blue band in the time series, as well as the ratio of the minimum and sub-minimum values of the near-infrared band in the time series, to eliminate noise from the time series data. Finally, the maximum value of the clear blue band and the minimum value of the near-infrared band after noise removal are employed for cloud and cloud shadow detection, respectively. A national and a global dataset were used to validate the TSMM, and it was quantitatively compared against five other advanced methods or products. When clouds and cloud shadows are detected simultaneously, the overall accuracy (OA) on the S2ccs dataset reaches 0.93 and the F1 score reaches 0.85, increases of 3% and 9%, respectively, over the most advanced CS+S2. On the CloudSEN12 dataset, the producer's accuracy (PA) and F1 score increase by 10% and 4%, respectively, compared with CS+S2. Additionally, when applied to Landsat-8 images, TSMM outperforms Fmask, demonstrating its strong generalization capability.
(This article belongs to the Special Issue Satellite-Based Cloud Climatologies)
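A loose reading of the TSMM core can be sketched as follows. The snippet screens time-series extremes with max/sub-max and sub-min/min ratios before using the blue-band maximum and NIR minimum as detection references; the thresholds and the per-pixel simplification are assumptions, not the paper's exact procedure.

```python
# Hedged, illustrative reading of the TSMM idea: ratio tests remove one-off
# noisy extremes, then the blue-band max flags clouds and the NIR min flags
# shadows. All thresholds here are toy assumptions.
import numpy as np

def tsmm_masks(blue, nir, ratio_thr=1.5):
    """blue, nir: (T, H, W) time series. Returns boolean (H, W) masks."""
    b_sorted = np.sort(blue, axis=0)            # ascending along time
    n_sorted = np.sort(nir, axis=0)
    # Ratio of max to sub-max (blue) and sub-min to min (NIR) screens out
    # isolated noisy extremes before they are used for detection.
    blue_ok = b_sorted[-1] / (b_sorted[-2] + 1e-8) < ratio_thr
    nir_ok = n_sorted[1] / (n_sorted[0] + 1e-8) < ratio_thr
    cloud = blue_ok & (b_sorted[-1] > 0.3)      # bright in blue -> cloud
    shadow = nir_ok & (n_sorted[0] < 0.1)       # dark in NIR -> shadow
    return cloud, shadow

rng = np.random.default_rng(7)
cloud, shadow = tsmm_masks(rng.uniform(0, 0.6, (10, 4, 4)),
                           rng.uniform(0, 0.5, (10, 4, 4)))
print(int(cloud.sum()), int(shadow.sum()))
```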

24 pages, 1764 KiB  
Article
Hyperspectral Image Shadow Enhancement Using Three-Dimensional Dynamic Stochastic Resonance and Classification Based on ResNet
by Xuefeng Liu, Yangyang Kou and Min Fu
Electronics 2024, 13(3), 500; https://doi.org/10.3390/electronics13030500 - 24 Jan 2024
Cited by 2 | Viewed by 1329
Abstract
Classification is an important means of extracting rich information from hyperspectral images (HSIs). However, many HSIs contain shadowed areas, where noise severely affects the extraction of useful information. General noise removal may lead to loss of spatial correlation and spectral features. In contrast, dynamic stochastic resonance (DSR) harnesses noise to enhance the signal in a way that better preserves the image's original information. Nevertheless, current 1D and 2D DSR methods fail to fully utilize the tensor properties of hyperspectral data and preserve the complete spectral features. Therefore, a hexa-directional differential format is derived in this paper to solve the system's output, and the iterative equation for HSI shadow enhancement is obtained, enabling 3D parallel processing of HSI spatial–spectral information. Meanwhile, internal parameters are adjusted to achieve optimal resonance. Furthermore, a ResNet-152 model embedded with the convolutional block attention module is proposed to diminish information redundancy and leverage data concealed within shadow areas. Experimental results on a real-world HSI demonstrate the potential of 3D DSR for enhancing weak signals in HSI shadow regions and the effectiveness of the proposed approach in improving classification.
(This article belongs to the Topic Hyperspectral Imaging and Signal Processing)
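The underlying iteration is the classic bistable DSR update, which the sketch below applies to a toy 1D signal; the paper's hexa-directional 3D differential formulation and parameter tuning are not reproduced, and the coefficients here are illustrative.

```python
# Hedged sketch of the classic bistable dynamic stochastic resonance (DSR)
# iteration often used for image enhancement:
#   x_{n+1} = x_n + dt * (a*x_n - b*x_n**3 + s),
# where the weak (dark/noisy) input s drives a double-well system.
import numpy as np

def dsr_enhance(signal, a=2.0, b=1.0, dt=0.01, iters=200):
    """signal: array of weak, roughly zero-centered intensities."""
    x = np.zeros_like(signal)
    for _ in range(iters):
        x = x + dt * (a * x - b * x**3 + signal)   # bistable system update
    return x

shadow_band = np.array([0.05, 0.10, -0.02, 0.08])  # weak zero-centered signal
print(np.round(dsr_enhance(shadow_band), 3))       # amplified toward the wells
```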

19 pages, 20196 KiB  
Article
Synthetic Document Images with Diverse Shadows for Deep Shadow Removal Networks
by Yuhi Matsuo and Yoshimitsu Aoki
Sensors 2024, 24(2), 654; https://doi.org/10.3390/s24020654 - 19 Jan 2024
Cited by 4 | Viewed by 2887
Abstract
Shadow removal for document images is an essential task for digitized document applications. Recent shadow removal models have been trained on pairs of shadow images and shadow-free images. However, obtaining a large, diverse dataset for document shadow removal takes time and effort, so only small real datasets are available. Graphic renderers have been used to synthesize shadows to create relatively large datasets, but the limited number of unique documents and the limited lighting environments adversely affect network performance. This paper presents a large-scale, diverse dataset called the Synthetic Document with Diverse Shadows (SynDocDS) dataset, which comprises rendered images with diverse shadows augmented by a physics-based illumination model and can be utilized to obtain a more robust, higher-performance deep shadow removal network. We further propose a Dual Shadow Fusion Network (DSFN). Unlike natural images, document images often have constant background colors, so training a deep shadow removal network requires a strong understanding of global color features. The DSFN has a high global color comprehension and understanding of shadow regions and merges shadow attentions and features efficiently. We conduct experiments on three publicly available datasets, the OSR, Kligler's, and Jung's datasets, to validate our proposed method's effectiveness. Compared with training on existing synthetic datasets, training our model on the SynDocDS dataset improves the average PSNR from 23.00 dB to 25.70 dB and the average SSIM from 0.959 to 0.971. In addition, the experiments demonstrated that our DSFN clearly outperformed other networks across multiple metrics, including the PSNR, the SSIM, and its impact on OCR performance.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
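The data-synthesis idea can be illustrated compactly. The sketch below darkens a clean page under a shadow matte with a toy attenuation-plus-tint model to produce a (shadow, shadow-free) pair; the real SynDocDS pipeline uses rendered shadows and a physics-based illumination model, which this stand-in only gestures at.

```python
# Hedged sketch of synthetic pair generation: darken a clean document under a
# shadow matte, yielding (shadowed, shadow-free) training pairs. The
# attenuation/tint model is a toy stand-in for the paper's illumination model.
import numpy as np

def synthesize_shadow_pair(doc, matte, ambient=0.35, tint=(1.0, 0.97, 0.92)):
    """doc: HxWx3 shadow-free document in [0,1]; matte: HxW in [0,1],
    1 = fully shadowed. Returns (shadowed image, original)."""
    atten = 1.0 - matte[..., None] * (1.0 - ambient)   # direct light blocked
    shadowed = doc * atten * np.asarray(tint)          # slight color cast
    return np.clip(shadowed, 0, 1), doc

doc = np.ones((64, 64, 3)) * 0.95                      # near-white page
yy, xx = np.mgrid[0:64, 0:64]
matte = ((xx + yy) > 64).astype(float)                 # diagonal hard shadow
shadowed, clean = synthesize_shadow_pair(doc, matte)
print(round(float(shadowed.min()), 3), round(float(shadowed.max()), 3))
```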

20 pages, 5028 KiB  
Article
Automatic Detection Method for Black Smoke Vehicles Considering Motion Shadows
by Han Wang, Ke Chen and Yanfeng Li
Sensors 2023, 23(19), 8281; https://doi.org/10.3390/s23198281 - 6 Oct 2023
Cited by 5 | Viewed by 1982
Abstract
Various statistical data indicate that mobile source pollutants have become a significant contributor to atmospheric environmental pollution, with vehicle tailpipe emissions being the primary contributor to these mobile source pollutants. The motion shadow generated by a motor vehicle bears a visual resemblance to emitted black smoke, so this study focuses primarily on the interference of motion shadows in the detection of black smoke vehicles. Initially, the YOLOv5s model is used to locate moving objects, including motor vehicles, motion shadows, and black smoke emissions. The extracted images of these moving objects are then processed using simple linear iterative clustering (SLIC) to obtain superpixel images of the three categories for model training. Finally, these superpixel images are fed into a lightweight MobileNetv3 network to build a black smoke vehicle detection model for recognition and classification. This study breaks away from the traditional approach of "detection first, then removal" for overcoming shadow interference and instead employs a "segmentation-classification" approach, ingeniously addressing the coexistence of motion shadows and black smoke emissions. Experimental results show that the Y-MobileNetv3 model, which takes motion shadows into account, achieves an accuracy rate of 95.17%, a 4.73% improvement over the N-MobileNetv3 model (which does not consider motion shadows). Moreover, the average single-image inference time is only 7.3 ms. The superpixel segmentation algorithm effectively clusters similar pixels, facilitating the detection of trace amounts of black smoke emitted by motor vehicles. The Y-MobileNetv3 model not only improves the accuracy of black smoke vehicle recognition but also meets real-time detection requirements.
(This article belongs to the Special Issue Computer Vision Sensing and Pattern Recognition)
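The segmentation stage can be sketched with a real API. The example below runs SLIC (scikit-image) over a stand-in detection crop and iterates superpixels; the downstream MobileNetv3 call is a placeholder comment, and the parameters are illustrative.

```python
# Hedged sketch of the segmentation-classification pipeline: SLIC superpixels
# over a detected moving-object crop; each superpixel would then be classified
# (smoke / shadow / vehicle). Only the SLIC step uses a real API here.
import numpy as np
from skimage.segmentation import slic

def superpixel_regions(crop, n_segments=64):
    """crop: HxWx3 float image of a moving object; yields (label, mask) pairs."""
    labels = slic(crop, n_segments=n_segments, compactness=10, start_label=0)
    for lab in np.unique(labels):
        yield lab, labels == lab

crop = np.random.rand(96, 96, 3)                  # stand-in for a YOLOv5s crop
count = 0
for lab, mask in superpixel_regions(crop):
    mean_color = crop[mask].mean(axis=0)          # toy per-superpixel feature
    # class_id = mobilenetv3(region)              # placeholder for the classifier
    count += 1
print(f"superpixels: {count}")
```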

21 pages, 5310 KiB  
Article
Supraglacial Lake Evolution over Northeast Greenland Using Deep Learning Methods
by Katrina Lutz, Zahra Bahrami and Matthias Braun
Remote Sens. 2023, 15(17), 4360; https://doi.org/10.3390/rs15174360 - 4 Sep 2023
Cited by 10 | Viewed by 2720
Abstract
Supraglacial lakes in Greenland are highly dynamic hydrological features in which glacial meltwater accumulates, allowing for the loss and transport of freshwater from a glacial surface to the ocean or a nearby waterbody. Standard supraglacial lake monitoring techniques, specifically image segmentation, rely heavily on a series of region-dependent thresholds, limiting the adaptability of the algorithm to different illumination and surface variations, while being susceptible to the inclusion of false positives such as shadows. In this study, a supraglacial lake segmentation algorithm is developed for Sentinel-2 images based on a deep learning architecture (U-Net) to evaluate the suitability of artificial intelligence techniques in this domain. Additionally, a deep learning-based cloud segmentation tool developed specifically for polar regions is implemented in the processing chain to remove cloudy imagery from the analysis. Using this technique, a time series of supraglacial lake development is created for the 2016 to 2022 melt seasons over Nioghalvfjerdsbræ (79°N Glacier) and Zachariæ Isstrøm in Northeast Greenland, an area that covers 26,302 km2 and represents roughly 10% of the Northeast Greenland Ice Stream. The total lake area was found to have strong interannual variability, with the largest peak lake area of 380 km2 in 2019 and the smallest of 67 km2 in 2018. These results were then compared against an algorithm based on a thresholding technique to evaluate the agreement of the two methodologies. The deep learning-based time series shows a similar trend to that produced by a previously published thresholding technique, while being smoother and more encompassing of meltwater in higher-melt periods. Additionally, while not completely eliminating them, the deep learning model significantly reduces the inclusion of shadows as false positives. Overall, the use of deep learning on multispectral images for supraglacial lake segmentation proves advantageous.
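For contrast with the U-Net approach, the thresholding baseline it is compared against looks roughly like the sketch below: an NDWI-style band ratio with a fixed cutoff. The band choice (Sentinel-2 B3/B8) and threshold are illustrative assumptions.

```python
# Hedged sketch of a thresholding baseline for lake segmentation: an
# NDWI-style band ratio with a fixed, region-dependent cutoff -- exactly the
# kind of threshold the deep learning approach moves away from.
import numpy as np

def ndwi_lake_mask(green, nir, threshold=0.25):
    """green, nir: HxW Sentinel-2 reflectance arrays (e.g., B3 and B8)."""
    ndwi = (green - nir) / (green + nir + 1e-8)
    return ndwi > threshold             # True where meltwater is likely

rng = np.random.default_rng(3)
green, nir = rng.uniform(0.05, 0.6, (2, 5, 5))
mask = ndwi_lake_mask(green, nir)
print(f"lake pixels: {int(mask.sum())} of {mask.size}")
```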

23 pages, 387197 KiB  
Article
A Flexible Spatiotemporal Thick Cloud Removal Method with Low Requirements for Reference Images
by Yu Zhang, Luyan Ji, Xunpeng Xu, Peng Zhang, Kang Jiang and Hairong Tang
Remote Sens. 2023, 15(17), 4306; https://doi.org/10.3390/rs15174306 - 31 Aug 2023
Cited by 5 | Viewed by 1717
Abstract
Thick clouds and shadows have a significant impact on the availability of optical remote sensing data. Although various methods have been proposed to address this issue, they still have some limitations. First, most approaches rely on a single clear reference image as complementary information, which becomes challenging when the target image has large missing areas. Second, the existing methods that can utilize multiple reference images require the complementary data to have high temporal correlation, which is not suitable for situations where the difference between the reference image and the target image is large. To overcome these limitations, a flexible spatiotemporal deep learning framework based on generative adversarial networks is proposed for thick cloud removal, which allows three arbitrary temporal images to be used as references. The framework incorporates a three-step encoder that can leverage the uncontaminated information from the target image to assimilate the reference images, enhancing the model's ability to handle reference images with diverse temporal differences. A series of simulated and real experiments on Landsat 8 and Sentinel-2 data demonstrates the effectiveness of the proposed method, which is especially applicable to small/large-scale regions with reference images that differ significantly from the target image.
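The multi-reference input assembly can be sketched simply. The example below stacks the masked target, its validity mask, and three arbitrary-date references channel-wise for a generator; this layout is an assumption and does not reproduce the paper's three-step encoder.

```python
# Hedged sketch of input assembly for a multi-reference inpainting GAN: the
# cloudy target, its validity mask, and three arbitrary-date references are
# concatenated channel-wise. The layout is an assumption, not the paper's.
import numpy as np

def build_generator_input(target, cloud_mask, refs):
    """target: (H, W, B) cloudy image; cloud_mask: (H, W) True where missing;
    refs: list of three (H, W, B) reference images from arbitrary dates."""
    assert len(refs) == 3, "the framework uses three temporal references"
    valid = (~cloud_mask)[..., None].astype(target.dtype)
    masked_target = target * valid              # zero out contaminated pixels
    return np.concatenate([masked_target, valid] + list(refs), axis=-1)

H, W, B = 64, 64, 6                             # e.g., six Landsat 8 bands
x = build_generator_input(np.random.rand(H, W, B),
                          np.random.rand(H, W) > 0.7,
                          [np.random.rand(H, W, B) for _ in range(3)])
print(x.shape)                                  # (64, 64, 25)
```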
