Search Results (8)

Search Parameters:
Keywords = sparse superpixel graph

26 pages, 1967 KB  
Article
A Symmetric Multiscale Feature Fusion Architecture Based on CNN and GNN for Hyperspectral Image Classification
by Yaoqun Xu, Junyi Wang, Zelong You and Xin Li
Symmetry 2025, 17(11), 1930; https://doi.org/10.3390/sym17111930 - 11 Nov 2025
Viewed by 761
Abstract
Convolutional neural networks (CNNs) and graph convolutional networks (GCNs) have been widely applied to hyperspectral image classification, but both exhibit certain limitations. To address these issues, this paper proposes a multi-scale feature fusion architecture (MCGNet). Symmetry serves as the core design principle of MCGNet: its parallel CNN-GCN branches and multi-scale fusion mechanism strike a balance between local spectral-spatial features and global graph structural dependencies, effectively reducing redundancy and enhancing generalization. The architecture comprises four modules: the Spectral Noise Suppression (SNS) module enhances the signal-to-noise ratio of spectral features; the Local Spectral Extraction (LSE) module employs depthwise separable convolutions to extract local spectral-spatial features; the Superpixel-level Graph Convolution (SGC) module performs graph convolution on superpixel graphs to capture dependencies between object regions; and the Pixel-level Graph Convolution (PGC) module builds adaptive sparse pixel graphs from spectral and spatial similarity to capture irregular boundaries and fine-grained non-local relationships between pixels. These modules form a symmetric, hierarchical feature learning pipeline within a unified framework. Experiments on three public datasets (Indian Pines, Pavia University, and Salinas) demonstrate that MCGNet outperforms baseline methods in overall accuracy, average precision, and Kappa coefficient. The symmetric design not only improves classification performance but also gives the model strong interpretability and cross-dataset robustness, highlighting the value of symmetry principles in hyperspectral image analysis.
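As an illustration of the adaptive sparse pixel graph idea behind the PGC module, here is a minimal sketch (not MCGNet's actual implementation): each pixel is linked to its k most spectrally similar neighbours inside a small spatial window, so the adjacency stays sparse. The function name, window size, and Gaussian weighting are assumptions for illustration.

```python
# Hypothetical sketch of an adaptive sparse pixel graph: connect each pixel
# to its k most spectrally similar neighbours within a spatial window.
import numpy as np

def sparse_pixel_graph(cube, k=4, window=2, sigma=1.0):
    """cube: (H, W, B) hyperspectral patch -> dict mapping edge -> weight."""
    H, W, _ = cube.shape
    edges = {}
    for i in range(H):
        for j in range(W):
            cands = []
            for di in range(-window, window + 1):
                for dj in range(-window, window + 1):
                    ni, nj = i + di, j + dj
                    if (di, dj) != (0, 0) and 0 <= ni < H and 0 <= nj < W:
                        d = np.linalg.norm(cube[i, j] - cube[ni, nj])
                        cands.append((d, ni, nj))
            cands.sort(key=lambda t: t[0])
            for d, ni, nj in cands[:k]:  # keep only the k closest -> sparsity
                edges[((i, j), (ni, nj))] = np.exp(-d**2 / (2 * sigma**2))
    return edges

rng = np.random.default_rng(0)
patch = rng.random((5, 5, 8))      # toy 5x5 patch with 8 spectral bands
E = sparse_pixel_graph(patch)
```

With k = 4, every pixel contributes exactly four directed edges, so the graph size grows linearly in the number of pixels rather than quadratically.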
(This article belongs to the Section Computer)

19 pages, 1948 KB  
Article
Graph-MambaRoadDet: A Symmetry-Aware Dynamic Graph Framework for Road Damage Detection
by Zichun Tian, Xiaokang Shao and Yuqi Bai
Symmetry 2025, 17(10), 1654; https://doi.org/10.3390/sym17101654 - 5 Oct 2025
Viewed by 1244
Abstract
Road-surface distress poses a serious threat to traffic safety and imposes a growing burden on urban maintenance budgets. While modern detectors based on convolutional networks and Vision Transformers achieve strong frame-level performance, they often overlook an essential property of road environments: structural symmetry within road networks and damage patterns. We present Graph-MambaRoadDet (GMRD), a symmetry-aware and lightweight framework that integrates dynamic graph reasoning with state–space modeling for accurate, topology-informed, and real-time road damage detection. Specifically, GMRD employs an EfficientViM-T1 backbone and two DefMamba blocks, whose deformable scanning paths capture sub-pixel crack patterns while preserving geometric symmetry. A superpixel-based graph is constructed by projecting image regions onto OpenStreetMap road segments, encoding both spatial structure and symmetric topological layout. We introduce a Graph-Generating State–Space Model (GG-SSM) that synthesizes sparse sample-specific adjacency in O(M) time, further refined by a fusion module that combines detector self-attention with prior symmetry constraints. A consistency loss promotes smooth predictions across symmetric or adjacent segments. The full INT8 model contains only 1.8 M parameters and 1.5 GFLOPs, sustaining 45 FPS at 7 W on a Jetson Orin Nano, making it eight times lighter and 1.7× faster than YOLOv8-s. On RDD2022, TD-RD, and RoadBench-100K, GMRD surpasses strong baselines by up to +6.1 mAP50:95 and, on the new RoadGraph-RDD benchmark, achieves +5.3 G-mAP and a +0.05 consistency gain. Qualitative results demonstrate robustness under shadows, reflections, back-lighting, and occlusion. By explicitly modeling spatial and topological symmetry, GMRD offers a principled solution for city-scale road infrastructure monitoring under real-time and edge-computing constraints.
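The consistency loss described above can be sketched generically: penalise differing damage predictions on adjacent or symmetric road segments. The segment-pair list and probability shapes below are illustrative assumptions, not GMRD's actual API.

```python
# Generic segment-consistency loss: squared difference of class
# probabilities across adjacent/symmetric segment pairs, averaged.
import numpy as np

def consistency_loss(preds, edges):
    """preds: (N, C) per-segment class probabilities;
    edges: list of (i, j) adjacent or symmetric segment pairs."""
    total = 0.0
    for i, j in edges:
        total += np.sum((preds[i] - preds[j]) ** 2)
    return total / max(len(edges), 1)

p = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
smooth = consistency_loss(p, [(0, 1)])   # similar neighbours -> small loss
rough = consistency_loss(p, [(0, 2)])    # dissimilar neighbours -> large loss
```

Minimising such a term during training pushes predictions on connected segments toward agreement, which is the smoothness effect the abstract describes.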
(This article belongs to the Section Computer)

20 pages, 4058 KB  
Article
SCSU–GDO: Superpixel Collaborative Sparse Unmixing with Graph Differential Operator for Hyperspectral Imagery
by Kaijun Yang, Zhixin Zhao, Qishen Yang and Ruyi Feng
Remote Sens. 2025, 17(17), 3088; https://doi.org/10.3390/rs17173088 - 4 Sep 2025
Cited by 1 | Viewed by 1263
Abstract
In recent years, remarkable advances have been achieved in hyperspectral unmixing (HU). Sparse unmixing, which models mixed pixels as linear combinations of endmembers weighted by their fractional abundances, has become a dominant paradigm in hyperspectral image analysis. To address the inherent limitations of spectral-only approaches, spatial contextual information has been integrated into unmixing. In this article, a superpixel collaborative sparse unmixing algorithm with a graph differential operator (SCSU–GDO) is proposed, which effectively integrates superpixel-based local collaboration with graph differential spatial regularization. The algorithm involves three key steps. First, superpixel segmentation partitions the hyperspectral image into homogeneous regions, leveraging boundary information to preserve structural coherence. Next, a local collaborative weighted sparse regression model is formulated to jointly enforce data fidelity and sparsity constraints on abundance estimation. Finally, to enhance spatial consistency, the Laplacian matrix derived from graph learning is decomposed into a graph differential operator that adaptively captures local smoothness and structural discontinuities within the image. Comprehensive experiments on three datasets demonstrate the accuracy, robustness, and practical efficacy of the proposed method.
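The linear mixing model behind sparse unmixing can be illustrated with a small sketch: a pixel spectrum y is approximated as M a, where M holds endmember spectra and the abundance vector a is sparse and non-negative. The ISTA-style solver below is a generic stand-in under these assumptions, not the SCSU–GDO algorithm itself.

```python
# Generic sparse unmixing sketch: minimise ||y - M a||^2 + lam*||a||_1
# subject to a >= 0, via projected ISTA steps.
import numpy as np

def sparse_unmix(y, M, lam=0.001, steps=1000):
    L = np.linalg.norm(M, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(M.shape[1])
    for _ in range(steps):
        grad = M.T @ (M @ a - y)
        # gradient step, soft-threshold, and non-negativity projection
        a = np.maximum(a - grad / L - lam / L, 0.0)
    return a

rng = np.random.default_rng(1)
M = rng.random((50, 6))                    # 6 endmember spectra over 50 bands
a_true = np.array([0.7, 0.0, 0.3, 0.0, 0.0, 0.0])   # sparse ground truth
y = M @ a_true                             # noiseless mixed pixel
a_hat = sparse_unmix(y, M)
```

The recovered abundances stay non-negative and concentrate on the endmembers actually present, which is the sparsity behaviour the paradigm relies on.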

21 pages, 3507 KB  
Article
WSSGCN: Hyperspectral Forest Image Classification via Watershed Superpixel Segmentation and Sparse Graph Convolutional Networks
by Pingfei Chen, Xuyang Li, Yong Peng, Xiangsuo Fan and Qi Li
Forests 2025, 16(5), 827; https://doi.org/10.3390/f16050827 - 15 May 2025
Cited by 1 | Viewed by 950
Abstract
Hyperspectral image classification is crucial in remote sensing but faces challenges in forest ecosystem studies due to high-dimensional data, spectral variability, and spatial heterogeneity. This paper introduces WSSGCN, a novel framework for efficient forest image classification based on watershed superpixel segmentation and sparse graph convolutional networks. The method first applies watershed superpixel segmentation to divide hyperspectral images into semantically consistent regions, reducing computational complexity while preserving terrain boundary information. On this basis, a dual-branch model is designed: a local branch with multi-scale convolutional neural networks (CNNs) extracts spatial–spectral features, while a global branch constructs superpixel graphs and uses GCNs to model global context. To improve efficiency, a sparse tensor-based storage scheme for the adjacency matrix reduces complexity from quadratic to linear. Additionally, an attention-based adaptive fusion strategy dynamically balances local and global features. Experiments on multiple datasets show that WSSGCN outperforms mainstream methods in overall accuracy (OA), average accuracy (AA), and Kappa coefficient; notably, it achieves a 3.5% OA improvement and a 0.04 Kappa increase over SPEFORMER on the WHU-Hi-HongHu dataset. Sparse graph modeling keeps the method practical in resource-limited scenarios. This work offers an efficient solution for forest monitoring, supporting applications such as biodiversity assessment and deforestation tracking, and shows strong potential for real-world ecological conservation and forest management.
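The quadratic-to-linear storage argument can be made concrete: for a bounded-degree superpixel graph, keeping only the edge list (a COO-style triplet of arrays) needs O(E) memory rather than the O(N²) of a dense adjacency matrix, and matrix-vector products can be computed from the triplets directly. The chain adjacency below is a toy assumption for illustration.

```python
# Sparse (COO-style) adjacency storage vs. a dense N x N matrix.
import numpy as np

n = 1000                                   # number of superpixels
# Assumed neighbour structure: a chain, so each node touches <= 2 others.
rows = np.arange(n - 1)
cols = rows + 1
weights = np.ones(n - 1)

# Sparse storage: three 1-D arrays, O(E) memory.
sparse_bytes = rows.nbytes + cols.nbytes + weights.nbytes
# Dense storage would need n*n float64 entries, O(N^2) memory.
dense_bytes = n * n * 8

# Sparse matrix-vector product y = A @ x without materialising A:
x = np.ones(n)
y = np.zeros(n)
np.add.at(y, rows, weights * x[cols])      # upper-triangular edges
np.add.at(y, cols, weights * x[rows])      # symmetric counterparts
```

For this chain, the sparse representation is several hundred times smaller than the dense one, and the gap widens linearly as n grows.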

23 pages, 10089 KB  
Article
Multiple Superpixel Graphs Learning Based on Adaptive Multiscale Segmentation for Hyperspectral Image Classification
by Chunhui Zhao, Boao Qin, Shou Feng and Wenxiang Zhu
Remote Sens. 2022, 14(3), 681; https://doi.org/10.3390/rs14030681 - 31 Jan 2022
Cited by 17 | Viewed by 4444
Abstract
Hyperspectral image classification (HSIC) methods usually require many training samples for good performance. However, large numbers of labeled samples are difficult to obtain because labeling an HSI pixel by pixel is costly and time-consuming. Overcoming insufficient accuracy and stability under a small labeled training sample size (SLTSS) therefore remains a challenge for HSIC. In this paper, we propose a novel multiple superpixel graphs learning method based on adaptive multiscale segmentation (MSGLAMS) to address this problem. First, the multiscale-superpixel-based framework reduces the adverse effect of an improperly chosen superpixel segmentation scale on classification accuracy while avoiding the cost of manually seeking a suitable scale. To make full use of the superpixel-level spatial information at different segmentation scales, a novel two-step multiscale selection strategy is designed to adaptively select a group of complementary scales. To counter the bias and instability of a single model, multiple superpixel-based graphical models, obtained by constructing superpixel contracted graphs at the fused scales, jointly predict the final results via a pixel-level fusion strategy. Experimental results show that the proposed MSGLAMS outperforms other state-of-the-art algorithms; its overall accuracy reaches 94.312%, 99.217%, 98.373%, and 92.693% on Indian Pines, Salinas, University of Pavia, and the more challenging Houston2013 dataset, respectively.
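The pixel-level fusion of multiple scale-specific models can be sketched as simple probability averaging followed by an argmax; this is a generic illustration of combining per-scale predictions, not MSGLAMS's exact fusion rule, and the shapes are assumptions.

```python
# Generic pixel-level fusion: average per-scale class probabilities,
# then take the argmax as the final label for each pixel.
import numpy as np

def fuse_predictions(prob_maps):
    """prob_maps: (S, N, C) probabilities from S scale-specific models."""
    fused = np.mean(prob_maps, axis=0)     # pixel-wise average over scales
    return fused.argmax(axis=1)            # final label per pixel

probs = np.array([
    [[0.6, 0.4], [0.2, 0.8]],              # predictions at scale 1
    [[0.7, 0.3], [0.4, 0.6]],              # predictions at scale 2
])
labels = fuse_predictions(probs)
```

Averaging across scales damps the bias of any single segmentation scale, which is the stabilising effect multi-model fusion is meant to provide.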

18 pages, 33972 KB  
Article
Hyperspectral Image Classification Based on Sparse Superpixel Graph
by Yifei Zhao and Fengqin Yan
Remote Sens. 2021, 13(18), 3592; https://doi.org/10.3390/rs13183592 - 9 Sep 2021
Cited by 8 | Viewed by 3821
Abstract
Hyperspectral image (HSI) classification is one of the major problems in remote sensing. In particular, graph-based HSI classification is a promising topic that has received increasing attention in recent years. However, graphs that take pixels as nodes become very large, increasing the computational burden, and satisfactory classification results are often unattainable without considering spatial information when constructing the graph. To address these issues, this study proposes an efficient and effective semi-supervised spectral–spatial HSI classification method based on a sparse superpixel graph (SSG). In the constructed graph, each vertex represents a superpixel instead of a pixel, which greatly reduces the size of the graph. Meanwhile, both spectral information and spatial structure are considered through superpixels, local spatial connections, and global spectral connections. To verify the effectiveness of the proposed method, three real hyperspectral images, Indian Pines, Pavia University, and Salinas, are used to test its performance. Experimental results show that the proposed method performs well on all three benchmarks. Compared with several competitive superpixel-based HSI classification approaches, it offers high classification accuracy (>97.85%) and rapid implementation (<10 s), which clearly favors its application in practice.
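A minimal sketch of the sparse superpixel graph idea, under the following assumptions: one vertex per superpixel (represented by its mean spectrum), edges between spatially adjacent superpixels (local spatial connections), plus edges to each superpixel's most spectrally similar peers (global spectral connections). The helper name and toy data are illustrative, not the paper's implementation.

```python
# Toy sparse superpixel graph: local spatial edges + global spectral edges.
import numpy as np

def build_ssg(means, adjacent_pairs, n_global=1):
    """means: (n, B) mean spectrum per superpixel;
    adjacent_pairs: spatially adjacent superpixel index pairs."""
    n = len(means)
    edges = set(tuple(sorted(p)) for p in adjacent_pairs)  # spatial edges
    for i in range(n):
        d = np.linalg.norm(means - means[i], axis=1)
        d[i] = np.inf                      # exclude self
        for j in np.argsort(d)[:n_global]: # most similar spectra
            edges.add(tuple(sorted((i, int(j)))))
    return edges

# Four superpixels: 0 and 1 share a spectrum type, as do 2 and 3,
# but spatial adjacency pairs them the other way round.
means = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
E = build_ssg(means, adjacent_pairs=[(0, 2), (1, 3)])
```

The resulting graph keeps both kinds of relation: spatially adjacent pairs (0, 2) and (1, 3), plus the spectrally matched pairs (0, 1) and (2, 3), while staying far smaller than a pixel-level graph.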
(This article belongs to the Section Remote Sensing Image Processing)

19 pages, 4033 KB  
Article
Label Noise Cleansing with Sparse Graph for Hyperspectral Image Classification
by Qingming Leng, Haiou Yang and Junjun Jiang
Remote Sens. 2019, 11(9), 1116; https://doi.org/10.3390/rs11091116 - 10 May 2019
Cited by 16 | Viewed by 5008
Abstract
In a real hyperspectral image classification task, label noise inevitably exists in training samples. To deal with it, current methods assume that the noise obeys a Gaussian distribution, which is rarely the case in practice, because training samples at the boundaries between classes are the ones most likely to be mislabeled. In this paper, we propose a spectral–spatial sparse graph-based adaptive label propagation (SALP) algorithm to address a more practical case, where the label information is contaminated by both random noise and boundary noise. SALP includes two main steps. First, a spectral–spatial sparse graph is constructed to depict the contextual correlations between pixels within the same superpixel homogeneous region, generated by superpixel image segmentation, and a transfer matrix is produced to describe the transition probability between pixels. Second, after randomly splitting the training pixels into “clean” and “polluted” sets, we iteratively propagate label information from “clean” to “polluted” based on the transfer matrix, and the relabeling strategy for each pixel is adaptively adjusted according to its spatial position in the corresponding homogeneous region. Experimental results on two standard hyperspectral image datasets show that the proposed SALP, applied over four major classifiers, significantly decreases the influence of noisy labels and achieves better performance than the baselines.
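Propagating labels from "clean" to "polluted" pixels with a transition matrix can be sketched generically: repeatedly multiply the label matrix by the row-stochastic transfer matrix while clamping the trusted labels. The toy chain graph and clamping rule below are illustrative assumptions, not SALP itself.

```python
# Generic clamped label propagation with a row-stochastic transfer matrix.
import numpy as np

def propagate(P, Y, clean_mask, iters=50):
    """P: (N, N) row-stochastic transition matrix; Y: (N, C) one-hot labels;
    clean labels are re-clamped after every propagation step."""
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = P @ F
        F[clean_mask] = Y[clean_mask]      # trust the clean labels
    return F.argmax(axis=1)

# 4-node chain 0-1-2-3: nodes 0 and 3 carry clean labels of opposite
# classes; nodes 1 and 2 start with noisy labels.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
Y = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=float)
clean = np.array([True, False, False, True])
labels = propagate(P, Y, clean)
```

After propagation each polluted node adopts the class of the nearer clean node, overriding its initial noisy label, which is the cleansing effect the method aims for.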
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)

22 pages, 8731 KB  
Article
Salient Region Detection Using Diffusion Process with Nonlocal Connections
by Huiyuan Luo, Guangliang Han, Peixun Liu and Yanfeng Wu
Appl. Sci. 2018, 8(12), 2526; https://doi.org/10.3390/app8122526 - 6 Dec 2018
Cited by 4 | Viewed by 3032
Abstract
Diffusion-based salient region detection methods have gained great popularity. In most of them, saliency values are ranked on a 2-layer neighborhood graph that connects each node to its neighboring nodes and to the nodes sharing common boundaries with those neighbors. However, because only the local relevance between neighbors is considered, the salient region may be rendered heterogeneous or even wrongly suppressed, especially when the features of the salient object are diverse. To address this issue, we present an effective saliency detection method that uses a diffusion process on a graph with nonlocal connections. First, a saliency-biased Gaussian model refines the saliency map based on the compactness cue, and the compactness saliency information is then diffused on a 2-layer sparse graph with nonlocal connections. Second, we obtain the contrast of each superpixel by restricting the reference region to the background; a saliency-biased Gaussian refinement model is likewise generated, and the saliency information based on the uniqueness cue is propagated on the 2-layer sparse graph. We linearly integrate the initial saliency maps from the compactness and uniqueness cues since they complement each other. Finally, to obtain a highlighted and homogeneous saliency map, a single-layer updating and multi-layer integrating scheme is presented. Comprehensive experiments on four benchmark datasets demonstrate that the proposed method performs better in terms of various evaluation metrics.
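Diffusing saliency seeds over a graph is commonly done with the manifold-ranking closed form f = (I − αS)⁻¹ y, where S is the symmetrically normalised affinity matrix. The sketch below illustrates that general mechanism on a toy chain; it is not this paper's exact propagation scheme, and the graph and seed are assumptions.

```python
# Generic graph diffusion (manifold-ranking closed form) of saliency seeds.
import numpy as np

def diffuse(W, y, alpha=0.9):
    """W: (N, N) symmetric affinity; y: seed saliency vector."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))        # D^{-1/2} W D^{-1/2}
    return np.linalg.solve(np.eye(len(y)) - alpha * S, y)

# 4-node chain: a single saliency seed at node 0 spreads along the graph.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
f = diffuse(W, np.array([1.0, 0.0, 0.0, 0.0]))
```

The diffused scores decay with graph distance from the seed; adding nonlocal edges to W, as the method above does, lets distant but similar regions receive high scores as well.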
