Search Results (84)

Search Parameters:
Keywords = superpixel-to-superpixel similarity

21 pages, 4595 KiB  
Article
Weakly Supervised Semantic Segmentation of Remote Sensing Images Using Siamese Affinity Network
by Zheng Chen, Yuheng Lian, Jing Bai, Jingsen Zhang, Zhu Xiao and Biao Hou
Remote Sens. 2025, 17(5), 808; https://doi.org/10.3390/rs17050808 - 25 Feb 2025
Cited by 2 | Viewed by 1727
Abstract
In recent years, weakly supervised semantic segmentation (WSSS) has garnered significant attention in remote sensing image analysis due to its low annotation cost. To address the issues of inaccurate and incomplete seed areas and unreliable pseudo masks in WSSS, we propose a novel WSSS method for remote sensing images based on the Siamese Affinity Network (SAN) and the Segment Anything Model (SAM). First, we design a seed enhancement module for semantic affinity, which strengthens contextual relevance in the feature map by enforcing a unified constraint principle of cross-pixel similarity, thereby capturing semantically similar regions within the image. Second, leveraging the prior notion of cross-view consistency, we employ a Siamese network to regularize the consistency of CAMs from different affine-transformed images, providing additional supervision for weakly supervised learning. Finally, we utilize the SAM segmentation model to generate semantic superpixels, expanding the original CAM seeds to more completely and accurately extract target edges, thereby improving the quality of segmentation pseudo masks. Experimental results on the large-scale remote sensing datasets DRLSD and ISPRS Vaihingen demonstrate that our method achieves segmentation performance close to that of fully supervised semantic segmentation (FSSS) methods on both datasets. Ablation studies further verify the positive optimization effect of each module on segmentation pseudo labels. Our approach exhibits superior localization accuracy and precise visualization effects across different backbone networks, achieving state-of-the-art localization performance.
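
A minimal sketch (not the authors' code) of the seed-expansion idea: sparse CAM seeds are grown to whole superpixels by majority vote. Here `sp_labels` is a hypothetical integer label map standing in for the SAM-derived semantic superpixels:

```python
import numpy as np

def expand_seeds_to_superpixels(seed_map, sp_labels, min_frac=0.2):
    """Grow sparse CAM seeds to whole superpixels by majority vote.

    seed_map  : (H, W) int array, class index per pixel, -1 = unlabeled.
    sp_labels : (H, W) int array of superpixel ids (e.g., flattened SAM masks).
    min_frac  : minimum fraction of seeded pixels needed to claim a superpixel.
    """
    pseudo = np.full_like(seed_map, -1)
    for sp in np.unique(sp_labels):
        mask = sp_labels == sp
        seeds = seed_map[mask]
        seeds = seeds[seeds >= 0]
        if seeds.size < max(1, min_frac * mask.sum()):
            continue  # too few seeds inside: leave this superpixel unlabeled
        pseudo[mask] = np.bincount(seeds).argmax()  # majority class wins
    return pseudo
```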

26 pages, 394 KiB  
Review
Monitoring Yield and Quality of Forages and Grassland in the View of Precision Agriculture Applications—A Review
by Abid Ali and Hans-Peter Kaul
Remote Sens. 2025, 17(2), 279; https://doi.org/10.3390/rs17020279 - 15 Jan 2025
Cited by 7 | Viewed by 3036
Abstract
The potential of precision agriculture (PA) in forage and grassland management should be more extensively exploited to meet the increasing global food demand on a sustainable basis. Monitoring biomass yield and quality traits directly impacts fertilization and irrigation practices and the frequency of utilization (cuts) in grasslands. The main goal of this review is therefore to examine techniques for using PA applications to monitor productivity and quality in forages and grasslands. To achieve this, the authors discuss several monitoring technologies for biomass and plant stand characteristics (including quality) that make it possible to adopt digital farming in forage and grassland management. The review provides an overview of mass flow and impact sensors, moisture sensors, remote sensing-based approaches, near-infrared (NIR) spectroscopy, and the mapping of field heterogeneity, and it promotes decision support systems (DSSs) in this field. At a small scale, advanced sensors such as optical, thermal, and radar sensors mountable on drones; LiDAR (Light Detection and Ranging); and hyperspectral imaging techniques can be used to assess plant and soil characteristics. At a larger scale, the coupling of remote sensing with weather data (synergistic grassland yield modelling), Sentinel-2 data with radiative transfer modelling (RTM), Sentinel-1 backscatter, and CatBoost machine learning methods is discussed for digital mapping in support of precision harvesting and site-specific farming decisions. Delineating sward heterogeneity is more difficult in mixed grasslands because of spectral similarity among species; Diversity-Interactions models allow the various species interactions in mixed grasslands to be assessed jointly. Further, understanding such complex sward heterogeneity might become feasible by integrating spectral un-mixing techniques such as super-pixel segmentation, multi-level fusion procedures, and NIR spectroscopy combined with neural network models. This review offers a digital option for enhancing yield monitoring systems and implementing PA applications in forage and grassland management. The authors recommend, as a future research direction, the inclusion of the costs and economic returns of digital technologies for precision grasslands and fodder production.

18 pages, 7661 KiB  
Article
Rapid Water Quality Mapping from Imaging Spectroscopy with a Superpixel Approach to Bio-Optical Inversion
by Nicholas R. Vaughn, Marcel König, Kelly L. Hondula, Dominica E. Harrison and Gregory P. Asner
Remote Sens. 2024, 16(23), 4344; https://doi.org/10.3390/rs16234344 - 21 Nov 2024
Viewed by 890
Abstract
High-resolution water quality maps derived from imaging spectroscopy provide valuable insights for environmental monitoring and management, but processing every pixel of large datasets is extremely computationally intensive and limits the speed of map production. We demonstrate a superpixel approach to accelerating water quality parameter inversion on such data that considerably reduces time and resource needs. Neighboring pixels were clustered into spectrally similar superpixels, and bio-optical inversions were performed at the superpixel level before a nearest-neighbor interpolation of the results back to pixel resolution. We tested the approach on five example airborne imaging spectroscopy datasets from Hawaiian coastal waters, comparing outputs to pixel-by-pixel inversions for three water quality parameters: suspended particulate matter, chlorophyll-a, and colored dissolved organic matter. We found a significant reduction in computational time, with processing 38 to 2625 times faster for superpixel sizes of 50 to 5000 pixels (200 to 20,000 m²). Using 1000 paired output values from each example image, we found minimal reduction in accuracy (as a decrease in R² or an increase in RMSE) of the model results when the superpixel size was below 750 pixels at 2 m × 2 m resolution. These results mean that this methodology could reduce the time needed to produce regional- or global-scale maps and thereby allow environmental managers and other stakeholders to more rapidly understand and respond to changing water quality conditions.
(This article belongs to the Special Issue Remote Sensing of Aquatic Ecosystem Monitoring)
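
The acceleration idea, inverting once per superpixel and mapping results back to pixel resolution, can be sketched as follows. This is an illustrative reading of the pipeline, with skimage's SLIC as one plausible way to cluster spectrally similar neighbors and `invert_fn` a placeholder for a bio-optical inversion model:

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_inversion(cube, invert_fn, n_segments=2000):
    """Run a per-spectrum inversion once per superpixel instead of per pixel.

    cube      : (H, W, B) reflectance image.
    invert_fn : callable mapping a (B,) spectrum to one water-quality value.
    """
    labels = slic(cube, n_segments=n_segments, compactness=0.1,
                  channel_axis=-1, start_label=0)
    out = np.zeros(labels.max() + 1)
    for sp in np.unique(labels):
        out[sp] = invert_fn(cube[labels == sp].mean(axis=0))  # one call per superpixel
    return out[labels]  # broadcast superpixel results back to pixel resolution
```

Indexing the result array with the label map assigns every pixel its superpixel's value, which plays the role of the nearest-neighbor step; the cost drops from H×W inversions to roughly `n_segments`.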

22 pages, 16745 KiB  
Article
Unsupervised PolSAR Image Classification Based on Superpixel Pseudo-Labels and a Similarity-Matching Network
by Lei Wang, Lingmu Peng, Rong Gui, Hanyu Hong and Shenghui Zhu
Remote Sens. 2024, 16(21), 4119; https://doi.org/10.3390/rs16214119 - 4 Nov 2024
Cited by 1 | Viewed by 1536
Abstract
Supervised polarimetric synthetic aperture radar (PolSAR) image classification demands a large amount of precisely labeled data, which are difficult to obtain, so many unsupervised methods have been proposed for PolSAR image classification. The classification maps of unsupervised methods contain many high-confidence samples; these samples, which are often ignored, can be used as supervisory information to improve classification performance on PolSAR images. This study proposes a new unsupervised PolSAR image classification framework that combines high-confidence superpixel pseudo-labeled samples with semi-supervised classification methods. First, superpixel segmentation was performed on PolSAR images, and the geometric centers of the superpixels were generated. Second, the classification maps of rotation-domain deep mutual information (RDDMI), an unsupervised PolSAR image classification method, were used as the pseudo-labels of the superpixel center points. Finally, the unlabeled samples and the high-confidence pseudo-labeled samples were used to train a strong semi-supervised method, similarity matching (SimMatch). Experiments on three real PolSAR datasets showed that, compared with RDDMI, the accuracy of the proposed method increased by 1.70%, 0.99%, and 0.80%. The proposed framework provides significant performance improvements and is an efficient method for improving unsupervised PolSAR image classification.
(This article belongs to the Special Issue SAR in Big Data Era III)
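
The pseudo-label harvesting step can be pictured with a small sketch (an illustration, not the authors' code): read the class of each superpixel's geometric center from an unsupervised classification map, such as RDDMI output, and keep only high-confidence centers. The confidence map and its threshold are assumptions here:

```python
import numpy as np

def center_pseudo_labels(sp_labels, cls_map, conf_map, conf_thresh=0.9):
    """Collect high-confidence pseudo-labels at superpixel geometric centers.

    sp_labels : (H, W) superpixel ids.
    cls_map   : (H, W) classes from an unsupervised method (e.g., RDDMI).
    conf_map  : (H, W) confidence of that classification in [0, 1].
    Returns (rows, cols, labels) for centers passing the threshold.
    """
    rows, cols, labels = [], [], []
    for sp in np.unique(sp_labels):
        ys, xs = np.nonzero(sp_labels == sp)
        # mean coordinate as the geometric center; it may fall outside a
        # strongly concave superpixel, which is acceptable for a sketch
        cy, cx = int(ys.mean()), int(xs.mean())
        if conf_map[cy, cx] >= conf_thresh:
            rows.append(cy); cols.append(cx); labels.append(cls_map[cy, cx])
    return np.array(rows), np.array(cols), np.array(labels)
```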

15 pages, 14691 KiB  
Article
Semantic Aware Stitching for Panorama
by Yuan Jia, Zhongyao Li, Lei Zhang, Bin Song and Rui Song
Sensors 2024, 24(11), 3512; https://doi.org/10.3390/s24113512 - 29 May 2024
Cited by 1 | Viewed by 1452
Abstract
The most critical aspect of panorama generation is maintaining local semantic consistency. Objects may be projected from different depths in the captured images, and when warping the images to a unified canvas, pixels at the semantic boundaries of the different views become significantly misaligned. We propose two lightweight strategies to address this challenge efficiently. First, the original image is segmented into superpixels rather than regular grid cells to preserve the structure of each cell, and effective cost functions are proposed to generate the warp matrix for each superpixel. The warp matrix varies progressively for smooth projection, which contributes to a more faithful reconstruction of object structures. Second, to deal with artifacts introduced by stitching, we use a seam line method tailored to superpixels. The algorithm takes into account the feature similarity of neighboring superpixels, including color difference, structure, and entropy, and considers semantic information to avoid semantic misalignment. The optimal solution constrained by the cost functions is obtained under a graph model, and the resulting stitched images exhibit improved naturalness. The algorithm is tested extensively on common panorama stitching datasets. Experimental results show that it effectively mitigates artifacts, preserves semantic completeness, and produces panoramic images whose subjective quality is superior to that of alternative methods.
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)
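
A toy version of the superpixel-to-superpixel dissimilarity driving the seam search might look like the sketch below. It combines only two of the cues the abstract names, color difference and entropy (structure and semantics are omitted), and the weights are arbitrary assumptions:

```python
import numpy as np

def seam_cost(sp_a, sp_b, w_color=1.0, w_entropy=0.5):
    """Toy dissimilarity between two neighboring superpixels.

    sp_a, sp_b : (N, 3) uint8 RGB pixels belonging to each superpixel.
    """
    def entropy(pix):
        gray = pix.mean(axis=1).astype(np.uint8)
        p = np.bincount(gray, minlength=256) / len(gray)
        p = p[p > 0]
        return -(p * np.log2(p)).sum()  # gray-level Shannon entropy

    color = np.linalg.norm(sp_a.mean(axis=0) - sp_b.mean(axis=0))
    return w_color * color + w_entropy * abs(entropy(sp_a) - entropy(sp_b))
```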

23 pages, 22134 KiB  
Article
Multiobjective Evolutionary Superpixel Segmentation for PolSAR Image Classification
by Boce Chu, Mengxuan Zhang, Kun Ma, Long Liu, Junwei Wan, Jinyong Chen, Jie Chen and Hongcheng Zeng
Remote Sens. 2024, 16(5), 854; https://doi.org/10.3390/rs16050854 - 29 Feb 2024
Cited by 1 | Viewed by 1435
Abstract
Superpixel segmentation has been widely used in the field of computer vision, and the generation of PolSAR superpixels has also been widely studied for its feasibility and high efficiency. However, the initial number of PolSAR superpixels is usually set manually from experience, which has a significant impact on the final segmentation performance and on subsequent interpretation tasks. Additionally, the effective information in PolSAR superpixels is not fully analyzed and utilized during generation. To address these issues, a multiobjective evolutionary superpixel segmentation method for PolSAR image classification is proposed in this study. It contains two layers: an automatic optimization layer and a fine segmentation layer. By simultaneously considering the similarity information within superpixels and the difference information among superpixels, the automatic optimization layer determines a suitable number of superpixels automatically via multiobjective optimization. Considering the difficulty of finding accurate boundaries of complex ground objects in PolSAR images, the fine segmentation layer further improves superpixel quality by fully using the boundary information of good-quality superpixels during the evolution process. Experiments on different PolSAR image datasets validate that the proposed approach can automatically generate high-quality superpixels without any prior information.
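
The two competing objectives, similarity within superpixels and difference among superpixels, can be written down concretely. The sketch below is one plausible formulation for a multiobjective optimizer to minimize jointly, not the paper's exact objective functions:

```python
import numpy as np

def segmentation_objectives(features, sp_labels):
    """Two objectives for a candidate superpixel segmentation.

    features  : (H, W, D) per-pixel features (e.g., PolSAR channels).
    sp_labels : (H, W) superpixel ids.
    Returns (intra, inter): mean within-superpixel scatter, and negated mean
    separation of superpixel means, so both are minimized together.
    """
    flat = features.reshape(-1, features.shape[-1])
    ids = sp_labels.ravel()
    uniq = np.unique(ids)
    means = np.stack([flat[ids == s].mean(axis=0) for s in uniq])
    intra = np.mean([np.linalg.norm(flat[ids == s] - means[i], axis=1).mean()
                     for i, s in enumerate(uniq)])
    dists = np.linalg.norm(means[:, None] - means[None, :], axis=-1)
    inter = -dists[np.triu_indices(len(uniq), k=1)].mean()
    return intra, inter
```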

17 pages, 11999 KiB  
Article
Edge-Bound Change Detection in Multisource Remote Sensing Images
by Zhijuan Su, Gang Wan, Wenhua Zhang, Zhanji Wei, Yitian Wu, Jia Liu, Yutong Jia, Dianwei Cong and Lihuan Yuan
Electronics 2024, 13(5), 867; https://doi.org/10.3390/electronics13050867 - 23 Feb 2024
Cited by 4 | Viewed by 1870
Abstract
Detecting changes in multisource heterogeneous images is a great challenge for unsupervised change detection methods. Image-translation-based methods, which transform two images to be homogeneous for comparison, have become a mainstream approach. However, most of them rely primarily on information from unchanged regions, so the networks cannot fully capture the connection between the two heterogeneous representations. Moreover, the lack of a priori information and of sufficient training data makes training vulnerable to interference from changed pixels. In this paper, we propose an edge-oriented generative adversarial network (EO-GAN) for change detection that translates images indirectly using edge information, which serves as a core and stable link between heterogeneous representations. The EO-GAN is composed of an edge extraction network and a reconstructive network. During training, we ensure that the edges extracted from heterogeneous images are as similar as possible using supplementary data generated via superpixel segmentation. Experimental results on both heterogeneous and homogeneous datasets demonstrate the effectiveness of our proposed method.
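
The superpixel-based data supplement can be illustrated in a few lines: a boundary map derived from superpixel segmentation gives a modality-agnostic edge target for the edge extraction network. SLIC and the parameter values are assumptions of this sketch:

```python
import numpy as np
from skimage.segmentation import find_boundaries, slic

def superpixel_edge_map(image, n_segments=800):
    """Derive an edge map from the superpixel boundaries of one image."""
    labels = slic(image, n_segments=n_segments, compactness=10.0,
                  channel_axis=-1 if image.ndim == 3 else None)
    return find_boundaries(labels, mode='thick').astype(np.float32)
```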

15 pages, 17329 KiB  
Article
Enhanced Atrous Extractor and Self-Dynamic Gate Network for Superpixel Segmentation
by Bing Liu, Zhaohao Zhong, Tongye Hu and Hongwei Zhao
Appl. Sci. 2023, 13(24), 13109; https://doi.org/10.3390/app132413109 - 8 Dec 2023
Viewed by 1185
Abstract
A superpixel is a group of pixels with similar low-level and mid-level properties that can serve as a basic unit in the pre-processing of remote sensing images, so superpixel segmentation can greatly reduce computation cost. However, deep-learning-based methods still suffer from under-segmentation and low compactness on remote sensing images. To address these problems, we propose EAGNet, an enhanced atrous extractor and self-dynamic gate network. The enhanced atrous extractor extracts multi-scale superpixel features with contextual information, which effectively addresses low compactness. The self-dynamic gate network introduces gating and dynamic mechanisms to inject detailed information, which effectively addresses under-segmentation. Extensive experiments show that EAGNet achieves state-of-the-art performance among k-means-based and deep-learning-based methods, reaching 97.61 in ASA and 18.85 in CO on BSDS500. Furthermore, we conduct experiments on a remote sensing dataset to show the generalization of EAGNet to remote sensing applications.
(This article belongs to the Special Issue Deep Learning in Satellite Remote Sensing Applications)
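
The general idea of an atrous (dilated) multi-scale extractor can be sketched in PyTorch. The dilation rates and the 1×1 fusion below are illustrative guesses, not EAGNet's actual architecture:

```python
import torch
import torch.nn as nn

class AtrousExtractor(nn.Module):
    """Parallel 3x3 convolutions with growing dilation rates enlarge the
    receptive field without downsampling; a 1x1 conv fuses the scales."""

    def __init__(self, in_ch=3, out_ch=64, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.fuse(torch.cat(feats, dim=1))  # fused multi-scale feature
```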

13 pages, 4440 KiB  
Article
Detecting Structural Changes in the Choroidal Layer of the Eye in Neurodegenerative Disease Patients through Optical Coherence Tomography Image Processing
by Sofia Otin, Francisco J. Ávila, Victor Mallen and Elena Garcia-Martin
Biomedicines 2023, 11(11), 2986; https://doi.org/10.3390/biomedicines11112986 - 7 Nov 2023
Cited by 4 | Viewed by 1793
Abstract
Purpose: To evaluate alterations of the choroid in patients with a neurodegenerative disease versus healthy controls using a custom algorithm based on superpixel segmentation. Design: A cross-sectional study conducted on data obtained in a previous cohort study. Subjects: Swept-source optical coherence tomography (OCT) B-scan images obtained using a Triton (Topcon, Japan) device were compiled according to current OSCAR-IB and APOSTEL OCT image quality criteria. Images were included from three cohorts: multiple sclerosis (MS) patients, Parkinson disease (PD) patients, and healthy subjects; only patients with early-stage MS and PD were included. Methods: In total, 104 OCT B-scan images were processed using a custom superpixel segmentation (SpS) algorithm to detect boundary limits in the choroidal layer and the optical properties of the image. The algorithm groups pixels with similar structural properties to generate clusters with similar meaningful properties. Main outcomes: SpS selects and groups the superpixels in a segmented choroidal area, computing the choroidal optical image density (COID), measured as the standard mean gray level, and the total choroidal area (CA), measured in px². Results: The CA and choroidal density (CD) were significantly reduced in the two neurodegenerative disease groups (higher in PD than in MS) versus the healthy subjects (p < 0.001); the choroidal area was also significantly reduced in the MS group versus the healthy subjects. The COID increased significantly in the PD patients versus the MS patients and in the MS patients versus the healthy controls (p < 0.001). Conclusions: The SpS algorithm detected choroidal tissue boundary limits and differences in optical density in MS and PD patients versus healthy controls. Applied to OCT images, it could potentially act as a non-invasive biomarker for the early diagnosis of MS and PD.
(This article belongs to the Special Issue Neurodegenerative Diseases: Recent Advances and Future Perspectives)
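
A minimal sketch of the two outcome measures on a superpixel-segmented B-scan, assuming SLIC for the segmentation and a hypothetical `is_choroid` predicate in place of the paper's boundary-detection logic:

```python
import numpy as np
from skimage.segmentation import slic

def choroidal_metrics(bscan, is_choroid, n_segments=400):
    """Compute choroidal area (CA, px^2) and optical image density (COID).

    bscan      : (H, W) uint8 grayscale OCT B-scan.
    is_choroid : callable deciding from a superpixel's mean gray level
                 whether it belongs to the choroid, e.g. lambda g: 40 < g < 120.
    """
    labels = slic(bscan, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)
    sel = np.zeros(bscan.shape, dtype=bool)
    for sp in np.unique(labels):
        m = labels == sp
        if is_choroid(bscan[m].mean()):
            sel |= m  # add this superpixel to the segmented choroidal area
    area = int(sel.sum())                                   # CA in px^2
    density = float(bscan[sel].mean()) if area else np.nan  # COID (mean gray)
    return area, density
```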

25 pages, 12707 KiB  
Article
Unsupervised Nonlinear Hyperspectral Unmixing with Reduced Spectral Variability via Superpixel-Based Fisher Transformation
by Zhangqiang Yin and Bin Yang
Remote Sens. 2023, 15(20), 5028; https://doi.org/10.3390/rs15205028 - 19 Oct 2023
Cited by 2 | Viewed by 1879
Abstract
In hyperspectral unmixing, dealing with nonlinear mixing effects and spectral variability (SV) is a significant challenge. Traditional linear unmixing can be seriously degraded by the coupled residuals of nonlinearity and SV in remote sensing scenarios. To simplify calculation, current unmixing studies usually consider nonlinearity and SV separately; as a result, errors caused individually by nonlinearity or SV persist, potentially leading to overfitting and decreased accuracy of the estimated endmembers and abundances. In this paper, a novel unsupervised nonlinear unmixing method accounting for SV is proposed. First, an improved Fisher transformation scheme is constructed by combining an abundance-driven dynamic classification strategy with superpixel segmentation. It enlarges the differences between different classes of pixels and reduces the differences between pixels of the same class, thereby reducing the influence of SV, while spectral similarity is well maintained in local homogeneous regions. Second, the polynomial postnonlinear model is employed to represent observed pixels and explain nonlinear components. Regularized by a Fisher transformation operator and the spatial smoothness of abundances, data reconstruction errors in the original spectral space and the transformed space are weighted to formulate the unmixing problem. Finally, this problem is solved by a dimensional-division-based particle swarm optimization algorithm to produce accurate unmixing results. Extensive experiments on synthetic and real hyperspectral remote sensing data demonstrate the superiority of the proposed method in comparison with state-of-the-art approaches.
(This article belongs to the Special Issue Advances in Hyperspectral Remote Sensing Image Processing)
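
The polynomial postnonlinear model mentioned in the abstract has a compact standard form, x = Ea + b(Ea)⊙(Ea) + n. A minimal forward model follows; the notation matches the common PPNM formulation and is not necessarily the paper's:

```python
import numpy as np

def ppnm_forward(E, a, b):
    """Polynomial postnonlinear mixing: x = Ea + b * (Ea) * (Ea).

    E : (B, R) endmember matrix; a : (R,) abundances; b : scalar nonlinearity.
    """
    lin = E @ a
    return lin + b * lin * lin

# reconstruction error of a candidate (a, b) against an observed spectrum x:
# err = np.linalg.norm(x - ppnm_forward(E, a, b))
```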

20 pages, 5028 KiB  
Article
Automatic Detection Method for Black Smoke Vehicles Considering Motion Shadows
by Han Wang, Ke Chen and Yanfeng Li
Sensors 2023, 23(19), 8281; https://doi.org/10.3390/s23198281 - 6 Oct 2023
Cited by 5 | Viewed by 1990
Abstract
Various statistical data indicate that mobile source pollutants have become a significant contributor to atmospheric environmental pollution, with vehicle tailpipe emissions being the primary contributor among them. The motion shadow generated by motor vehicles bears a visual resemblance to emitted black smoke, so this study focuses on the interference of motion shadows in the detection of black smoke vehicles. Initially, the YOLOv5s model is used to locate moving objects, including motor vehicles, motion shadows, and black smoke emissions. The extracted images of these moving objects are then processed using simple linear iterative clustering to obtain superpixel images of the three categories for model training. Finally, these superpixel images are fed into a lightweight MobileNetv3 network to build a black smoke vehicle detection model for recognition and classification. This study breaks away from the traditional "detection first, then removal" approach to overcoming shadow interference and instead employs a "segmentation-classification" approach that directly addresses the coexistence of motion shadows and black smoke emissions. Experimental results show that the Y-MobileNetv3 model, which takes motion shadows into account, achieves an accuracy rate of 95.17%, a 4.73% improvement over the N-MobileNetv3 model (which does not consider motion shadows), with an average single-image inference time of only 7.3 ms. The superpixel segmentation algorithm effectively clusters similar pixels, facilitating the detection of trace amounts of black smoke emitted by motor vehicles. The Y-MobileNetv3 model thus improves the accuracy of black smoke vehicle recognition while meeting real-time detection requirements.
(This article belongs to the Special Issue Computer Vision Sensing and Pattern Recognition)
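
The "segmentation-classification" front end can be sketched as follows: each detected crop is superpixel-segmented with SLIC and rendered as its superpixel mean colors before classification (the paper feeds such superpixel images to MobileNetv3). The SLIC parameters and the mean-color rendering are assumptions of this sketch:

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_crops(frame, boxes, n_segments=200):
    """Superpixel-render detected regions for a downstream classifier.

    frame : (H, W, 3) uint8 video frame.
    boxes : iterable of (x0, y0, x1, y1) detections from the object locator.
    """
    crops = []
    for x0, y0, x1, y1 in boxes:
        crop = frame[y0:y1, x0:x1]
        labels = slic(crop, n_segments=n_segments, compactness=10.0,
                      channel_axis=-1)
        sp_img = np.zeros_like(crop, dtype=np.float32)
        for sp in np.unique(labels):
            m = labels == sp
            sp_img[m] = crop[m].mean(axis=0)  # flatten texture within superpixel
        crops.append(sp_img)
    return crops
```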

24 pages, 32950 KiB  
Article
Remote Sensing Image Haze Removal Based on Superpixel
by Yufeng He, Cuili Li and Tiecheng Bai
Remote Sens. 2023, 15(19), 4680; https://doi.org/10.3390/rs15194680 - 24 Sep 2023
Cited by 14 | Viewed by 2500
Abstract
The presence of haze significantly degrades the quality of remote sensing images, causing color distortion, reduced contrast, loss of texture, and blurred image edges, which can ultimately lead to the failure of remote sensing application systems. In this paper, we propose a superpixel-based visible remote sensing image dehazing algorithm, SRD. First, hazy remote sensing images are divided into content-aware patches using superpixels, which cluster adjacent pixels by their similarity in color and brightness; we assume that each superpixel region shares the same atmospheric light and transmission properties. Methods to estimate the local atmospheric light and transmission within each superpixel are then proposed. Unlike existing dehazing algorithms that assume a globally constant atmospheric light, our approach considers the heterogeneous distribution of ambient atmospheric light and models it as a global non-uniform variable, and we introduce an effective atmospheric light estimation method inspired by the maximum reflectance prior. Moreover, recognizing the wavelength-dependent nature of light transmission, we estimate the transmittance for each RGB channel of the input image independently. Quantitative and qualitative evaluations in comprehensive experiments on synthetic datasets and real-world samples demonstrate the superior performance of the proposed algorithm compared with state-of-the-art remote sensing image dehazing methods.
(This article belongs to the Section Remote Sensing Image Processing)
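
The per-superpixel recovery loop can be illustrated with deliberately naive estimators for the atmospheric light A and transmission t; only the structure follows the abstract (A and t shared within a superpixel, t estimated per RGB channel), while the paper's maximum-reflectance-prior estimator is more involved:

```python
import numpy as np

def dehaze_by_superpixel(I, labels, t_min=0.1):
    """Haze removal with per-superpixel A and per-channel t (toy estimators).

    I : (H, W, 3) hazy image scaled to [0, 1]; labels : (H, W) superpixel ids.
    """
    J = np.empty_like(I)
    for sp in np.unique(labels):
        m = labels == sp
        A = I[m].max(axis=0)  # naive local atmospheric light (per channel)
        t = 1.0 - 0.95 * I[m].min(axis=0) / np.maximum(A, 1e-6)
        t = np.maximum(t, t_min)  # avoid amplifying noise in dense haze
        J[m] = (I[m] - A) / t + A  # invert the atmospheric scattering model
    return np.clip(J, 0.0, 1.0)
```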

21 pages, 7694 KiB  
Article
A Collaborative Superpixelwise Autoencoder for Unsupervised Dimension Reduction in Hyperspectral Images
by Chao Yao, Lingfeng Zheng, Longchao Feng, Fan Yang, Zehua Guo and Miao Ma
Remote Sens. 2023, 15(17), 4211; https://doi.org/10.3390/rs15174211 - 27 Aug 2023
Cited by 5 | Viewed by 1708
Abstract
The dimension reduction (DR) technique plays an important role in hyperspectral image (HSI) processing. Among various DR methods, superpixel-based approaches offer flexibility in capturing spectral–spatial information and have shown great potential in HSI tasks. Superpixel-based methods divide the samples into groups and apply the DR technique within each small group. Nevertheless, we find that these methods increase intra-class disparity by neglecting the fact that samples from the same class may reside in different superpixels, resulting in performance decay. To address this problem, a novel unsupervised DR method named the Collaborative superpixelwise Auto-Encoder (ColAE) is proposed in this paper. ColAE begins by segmenting the HSI into homogeneous regions using a superpixel-based method. Then, a set of Auto-Encoders (AEs) is applied to the samples within each superpixel. To reduce intra-class disparity, a manifold loss is introduced to constrain samples from the same class, even those located in different superpixels, to have similar representations in the code space. In this way, compact and discriminative spectral–spatial features are obtained. Experimental results on three HSI data sets demonstrate the promising performance of ColAE compared to existing state-of-the-art methods.
(This article belongs to the Special Issue Hyperspectral Remote Sensing Imaging and Processing)
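
The role of the manifold loss, pulling codes of same-class samples together even when they live in different superpixels, can be sketched in a few lines of PyTorch. In the unsupervised setting the class assignments would come from clustering; they are an assumption of this sketch:

```python
import torch

def manifold_loss(codes, labels):
    """Penalize within-class scatter of latent codes across superpixels.

    codes  : (N, d) latent codes from the per-superpixel autoencoders.
    labels : (N,) class/cluster assignment per sample.
    """
    loss = codes.new_zeros(())
    classes = labels.unique()
    for c in classes:
        sub = codes[labels == c]
        loss = loss + ((sub - sub.mean(dim=0)) ** 2).sum(dim=1).mean()
    return loss / len(classes)
```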

22 pages, 14977 KiB  
Article
Balanced Cloud Shadow Compensation Method in High-Resolution Image Combined with Multi-Level Information
by Yubin Lei, Xianjun Gao, Yuan Kou, Baifa Wu, Yue Zhang and Bo Liu
Appl. Sci. 2023, 13(16), 9296; https://doi.org/10.3390/app13169296 - 16 Aug 2023
Cited by 1 | Viewed by 2058
Abstract
As clouds of different thicknesses block sunlight, large areas of cloud shadow with varying brightness can appear on the ground. Cloud shadows in high-resolution remote sensing images lead to uneven loss of image feature information. However, cloud shadows still retain feature information, and compensating for and restoring this unevenly occluded information is of great significance for improving image quality. Although traditional shadow compensation methods can enhance shaded brightness, their results are inconsistent within a single shadow region, showing over-compensation or insufficient compensation. This paper therefore proposes a balanced shadow compensation method combining multi-level information: the information of a shadow pixel, of a local super-pixel centered on that pixel, of the global cloud shadow region, and of the global non-shadow region, to accommodate the internal differences of a cloud shadow. First, cloud shadows are detected and post-processed: the initial shadow is detected with threshold methods combining designed complex shadow features and morphological shadow index features, and post-processing based on shadow area and morphological operations removes small non-cloud-shadow objects. Meanwhile, the image is divided into homogeneous super-pixel regions using the super-pixel segmentation principle. A super-pixel region sits between the pixel level and the shadow-region level; unlike a pixel or a fixed window, it provides a measurement level that respects object homogeneity. A balanced compensation model is then designed by combining the value of a shadow pixel with the means and variances of its super-pixel, the shadow region, and the non-shadow region, on the basis of the linear correlation correction principle. The super-pixel around a shadow pixel provides a reliable local homogeneous region that reflects differences inside the shadow region, so introducing it into the model compensates the shaded information in a balanced way. Compared with using only pixel and shadow-region information, the compensated results adapt to the illumination differences within a cloud shadow. Experimental results show that the proposed method yields higher-quality compensation than reference methods: it enhances brightness and recovers detailed information in shadow regions in a more balanced way, resolving over-compensation and insufficient compensation within a single shadow region, so that the result resembles non-shadow regions. The proposed method can thus recover cloud shadow information self-adaptively to improve image quality and support other applications.
(This article belongs to the Section Earth Sciences)
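
The balanced linear-correction idea can be sketched for a single shadow pixel. Blending the super-pixel statistics with the global shadow-region statistics is what lets compensation adapt inside one shadow; the blend weight below is an assumption, not the paper's exact model:

```python
def compensate_pixel(v, sp_stats, shadow_stats, nonshadow_stats, w_sp=0.5):
    """Linear-correlation correction of one shadow pixel value.

    v : shadow pixel value; *_stats : (mean, std) of the local super-pixel,
    the global shadow region, and the global non-shadow region.
    """
    mu = w_sp * sp_stats[0] + (1 - w_sp) * shadow_stats[0]  # local reference mean
    sd = w_sp * sp_stats[1] + (1 - w_sp) * shadow_stats[1]  # local reference std
    return (v - mu) / max(sd, 1e-6) * nonshadow_stats[1] + nonshadow_stats[0]
```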

20 pages, 14922 KiB  
Article
Hyperspectral Image Dimensionality Reduction Algorithm Based on Spatial–Spectral Adaptive Multiple Manifolds
by Shufang Xu, Sijie Geng, Qi Yang and Hongmin Gao
Appl. Sci. 2023, 13(16), 9180; https://doi.org/10.3390/app13169180 - 11 Aug 2023
Cited by 1 | Viewed by 1900
Abstract
Hyperspectral images contain rich spatial–spectral information and have high dimensionality, which makes feature extraction for classification challenging and can result in suboptimal performance. We propose a hyperspectral image dimensionality reduction algorithm based on spatial–spectral adaptive multiple manifolds to address the problem that, under the uniform projection transformation of traditional dimensionality reduction methods, dissimilar samples show only small feature differences in the subspace. First, to address the spatial boundary mismatch caused by traditional algorithms that re-characterize a pixel using the pixels in a fixed surrounding area as its neighbors, an adaptive weight representation method based on super-pixel segmentation is proposed, which enhances the similarity of similar samples and the dissimilarity of dissimilar samples. Second, to address the problem that a single manifold cannot completely characterize the neighborhood relationships among samples of different categories, an adaptive multi-manifold representation method is proposed: the feature representation of the entire hyperspectral data in the low-dimensional subspace is obtained by adaptively fusing the intra- and inter-manifold maps constructed for each category of samples in the spatial and spectral dimensions. Experimental results on two public datasets show that the proposed method achieves better results on the hyperspectral image dimensionality reduction task.
(This article belongs to the Special Issue Advances in Deep Learning for Hyperspectral Image Processing)
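
One way to picture the adaptive, superpixel-driven weighting is a heat-kernel affinity computed over the pixels of one superpixel instead of a fixed window; this is illustrative only, and `sigma` is an arbitrary assumption:

```python
import numpy as np

def superpixel_affinity(spectra, sigma=1.0):
    """Heat-kernel affinities among the pixels of one superpixel.

    spectra : (N, B) spectra of the pixels inside one superpixel.
    """
    d2 = ((spectra[:, None] - spectra[None, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-affinity
    return W
```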
