Article

Deep Spatial Graph Convolution Network with Adaptive Spectral Aggregated Residuals for Multispectral Point Cloud Classification

1 Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
2 Yunnan Key Laboratory of Computer Technologies Application, Kunming University of Science and Technology, Kunming 650500, China
3 Beijing Anlu International Technology Co., Ltd., Beijing 100043, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(18), 4417; https://doi.org/10.3390/rs15184417
Submission received: 30 July 2023 / Revised: 30 August 2023 / Accepted: 5 September 2023 / Published: 7 September 2023

Abstract

Over an extended period, considerable research has focused on elaborate mapping for navigation systems. Multispectral point clouds containing both spatial and spectral information play a crucial role in remote sensing by enabling more accurate land cover classification and map creation. However, existing graph-based methods often overlook the individual characteristics and information patterns of these graphs, leading to a convoluted pattern of information aggregation and a failure to fully exploit the spatial–spectral information when classifying multispectral point clouds. To address these limitations, this paper proposes a deep spatial graph convolution network with adaptive spectral aggregated residuals (DSGCN-ASR). Specifically, the proposed DSGCN-ASR employs spatial graphs for deep convolution, using spectral graph aggregated information as residuals. This method effectively overcomes the limitations of shallow networks in capturing the nonlinear characteristics of multispectral point clouds. Furthermore, the incorporation of adaptive residual weights enhances the use of spatial–spectral information, resulting in improved overall model performance. Experimental validation was conducted on two datasets containing real scenes, comparing the proposed DSGCN-ASR with several state-of-the-art graph-based methods. The results demonstrate that DSGCN-ASR better uses the spatial–spectral information and produces superior classification results. This study provides new insights and ideas for the joint use of spatial and spectral information in the context of multispectral point clouds.

1. Introduction

Navigation is a field of immense importance in modern society, requiring the integration of various disciplines, such as cartography, geography, remote sensing technology, and computer science. Geographic information systems (GISs) and points of interest (POIs) are indispensable for effective navigation. In this context, remote sensing plays a vital role by providing essential components such as accurate base maps and precise land cover models. Thus, land cover classification has emerged as a fundamental research direction within the realm of remote sensing.
Since the early 2000s, light detection and ranging (LiDAR) technology has contributed substantially to the field of remote sensing. LiDAR has emerged as a valuable tool for collecting high-quality data, offering a rich and detailed data foundation for accurate and refined land cover classification. As an active remote sensing method, LiDAR provides distinct advantages in land cover analysis; for example, it is unaffected by environmental factors such as illumination, allowing for consistent data collection regarding the spatial distribution of land cover. This capability makes LiDAR well suited to high-resolution and accurate land cover classification. However, because point clouds are non-Euclidean data with an irregular distribution, their processing poses a distinct challenge. Many studies have achieved notable success with LiDAR point clouds, the most classic being the PointNet series [1,2,3].

1.1. Data Description

As an evolutionary LiDAR technology, airborne multispectral LiDAR systems can capture the spatial information of land cover while acquiring the spectral intensity of the corresponding points. Teledyne Optech unveiled the inaugural airborne multispectral LiDAR system in 2014, which operates across three channels. Channel 1 operates at a mid-infrared (MIR) wavelength of 1550 nm with a forward-looking angle of 3.5 degrees. Channel 2 operates at a near-infrared (NIR) wavelength of 1064 nm with a nadir-looking angle of 0 degrees. Lastly, Channel 3 operates in the green spectrum at a wavelength of 532 nm with a forward-looking angle of 7 degrees. Two datasets captured by the system are shown in Figure 1: the Harbor of Tobermory (HT) and the University of Houston (UH).

1.2. Related Literature

The emergence of multispectral LiDAR has enriched the information dimension of point cloud data. The multispectral point cloud inherits the ability of the traditional point cloud to characterize the spatial distribution of land cover while collecting corresponding spectral information for each point. With the increase in the information dimension, researchers have been faced with a new dilemma of how to effectively and jointly use the rich spatial–spectral information in multispectral point clouds.

1.2.1. Image-Oriented Methods

Several researchers have transformed 3D multispectral point clouds into 2D images to employ traditional image-oriented methods, such as support vector machine (SVM) [4], Adaboost [5], random forest [6], Markov random field [7], and conditional random field [8].
Other researchers have proposed deep learning models specifically designed for point clouds. Yu et al. introduced CapViT, a cross-context capsule vision transformer, for land cover classification using multispectral LiDAR data. It uses three streams of capsule transformer encoders to capture long-range global feature interactions at different context scales and effectively fuses cross-context feature semantics for accurate land cover type inferences [9]. ESA-CapsNet uses a novel capsule encoder–decoder architecture and a capsule-based attention module to extract informative feature semantics and enhance feature saliency and robustness [10]. Wang et al. proposed a neural network architecture for learning with point clouds that captures semantically similar structures in deeper layers despite the long distance between them in the original input space. The network utilizes a dynamic graph convolutional neural network (DGCNN) approach, which combines global shape structure with local neighborhood information to improve the learning process [11]. Liu et al. proposed RS-CNN [12], a relation–shape convolutional neural network that extends a regular-grid CNN to irregular configurations for point cloud analysis by learning from the geometric topology constraint among points. Shape awareness and robustness are achieved by learning a high-level relationship expression from predefined geometric priors, leading to contextual shape-aware learning for point cloud analysis. Further studies in this direction are reported in [13,14,15,16,17,18,19].
However, the transformations performed by these methods result in the loss of the original information of the multispectral point cloud.

1.2.2. Point-Oriented Methods

Traditional Methods: Jing et al. proposed SE-PointNet++ by embedding the squeeze-and-excitation block (SE block) into the PointNet++ network to improve the performance of multispectral LiDAR point cloud classification by modeling the interdependence between channels. They utilized PointNet++, DGCNN, GACNet, and RSCNN as comparison models to demonstrate the superiority of SE-PointNet++ in multispectral LiDAR point cloud classification [1,2,3,11,12]. Hu et al. proposed RandLA-Net, a lightweight neural architecture that uses random point sampling and a novel local feature aggregation module to efficiently perform semantic segmentation on large-scale 3D point clouds [20]. Wang et al. proposed the TMDE algorithm for extracting discriminative geometric–spectral features from multispectral point cloud data; the algorithm preserves the intraclass sample distribution and maximizes the distance between different classes [21]. Related point-oriented approaches have also been explored [22,23,24].
Graph-Based Methods: Graph neural networks have received increasing attention from researchers due to their inherent ability to accurately characterize non-Euclidean data [25]. Some examples of these networks include GAC [26], FR-GCNet [27], GACNN [28], and MaSGCN [29]. For graph-based methods, the most immediate challenge is effectively measuring the similarity between points to represent a multispectral point cloud as a graph. Once a suitable similarity metric is found, state-of-the-art graph neural networks such as GCN [30], GAT [31], GCBNet [32], and GCNII [33] can be used to classify multispectral point clouds.
Despite these advances, effective methods are still needed to utilize the spatial–spectral information contained in multispectral point clouds without losing valuable information. Further research in this area holds promise for advancing the field of multispectral point cloud analysis and classification.

1.3. Motivation and Contributions

Researchers have constructed graphs using either spatial distance or spectral similarity, or have simply combined the two similarities in equal proportions to produce a joint graph. To compare the advantages and disadvantages of these technical routes, we used a simple GCN to classify the two previously mentioned multispectral point cloud datasets, constructing a graph with each of the above three methods. The classification results are visualized in Figure 2.
Upon visualizing the classification results, we found that the spatial graph tends to assign neighboring land covers to the same class, resulting in a contiguous distribution pattern. Conversely, the spectral graph exhibits superior capability in capturing long-range dependencies and effectively delineating boundaries between land cover types. However, the spectral graph also demonstrates a greater tendency to incorporate irrelevant land cover information and is more susceptible to interference from complex spectral signatures, such as those originating from water bodies. Ideally, the strengths of both should be combined to achieve finer classification while maintaining the robustness of the spatial graph and taking advantage of the high-quality performance of the spectral graph on the boundary. The visualization results (Figure 2c,f) show that simply combining the two does not achieve the ideal state, so achieving the reasonable joint use of spatial–spectral information is an important problem to be solved.
To address this problem, we developed a deep spatial graph convolution network with adaptive spectral aggregated residuals (DSGCN-ASR), which inputs both spatial and spectral graphs into the network. The proposed DSGCN-ASR uses a spatial graph to perform multiple layers of graph convolutions on multispectral point clouds and uses the information aggregated by the spectral graph as residuals, which are adaptively added during each convolution. Specifically, the main contributions can be summarized as follows:
1. A novel framework was developed for the simultaneous use of spatial and spectral information in multispectral point clouds. The spatial and spectral graphs are treated differently to preserve the robustness of the spatial graph in capturing the nearby land cover relationships while harnessing the discriminative power of the spectral graph in distinguishing between various features in proximity.
2. A deep graph neural network, DSGCN-ASR, was developed to learn the implicit relationships between points in a multispectral point cloud to overcome the insufficient capability of shallow graph neural networks in fitting the nonlinearity of multispectral point clouds in complex remote sensing scenes. Additionally, the spectral aggregated residuals were adaptively added to learn the spectral relationship between points, simultaneously addressing the oversmoothing problem of deep features.
The remainder of this paper is organized as follows. Section 2 describes the methodology and specific algorithms for the proposed DSGCN-ASR. Section 3 outlines the performance of the proposed DSGCN-ASR through experiments, and Section 4 provides the conclusions.

2. Methodology

In this section, we describe, in detail, the principles and implementation of the proposed DSGCN-ASR and provide the corresponding algorithm. The overall network structure is shown in Figure 3.

2.1. Construction of Spatial and Spectral Graphs

The data form of a multispectral point cloud can be viewed as a set of point clouds collected by multiple lasers of different wavelengths in the same scene. However, in practice, multiple bands of data are commonly integrated into a single point cloud. The integrated multispectral point cloud can be denoted as $P = \{p_1, p_2, p_3, \ldots, p_k\} \in \mathbb{R}^{(L+3) \times k}$, where $L$ is the number of bands and $k$ is the number of points in the multispectral point cloud. A single point can be represented as $p_i = [x, y, z, \lambda_1, \lambda_2, \ldots, \lambda_L]$, where $i \in [1, k]$ is the index of the point.
For a graph $G = (V, E)$, $V$ is the set of nodes and $E$ is the set of edges. For each node $i$, its corresponding feature $x_i$ can be represented by a matrix $X \in \mathbb{R}^{N \times D}$, where $N$ denotes the number of nodes and $D$ denotes the feature dimension of each node. For a multispectral point cloud, the matrix $X \in \mathbb{R}^{N \times D}$ corresponds to the point set $P = \{p_1, p_2, p_3, \ldots, p_k\} \in \mathbb{R}^{(L+3) \times k}$ and can be obtained by transposing $P$. Thus, the number of nodes $N$ is the number of points $k$, and the feature dimension $D$ is equal to $L + 3$.
Regarding the set of edges $E$, we separately measure the similarity between points for the spatial and the spectral information and compute two adjacency matrices, i.e., a spatial adjacency matrix and a spectral adjacency matrix. Specifically, we separately compute the Euclidean distances between points for the spatial and spectral information to obtain the corresponding distance matrices. Because a larger Euclidean distance implies a weaker correlation between points, each value is subtracted from the maximum value of its distance matrix. Finally, the resulting matrix is max–min normalized to obtain the adjacency matrices $A_{spatial}$ and $A_{spectral}$:
$$A_{spatial} = \mathrm{Normalized}\left( Dis_{Spatial}.max - Dis_{Spatial} \right)$$
$$Dis_{Spatial} = Dis_X + Dis_Y + Dis_Z$$
$$A_{spectral} = \mathrm{Normalized}\left( Dis_{Spectral}.max - Dis_{Spectral} \right)$$
$$Dis_{Spectral} = Dis_{\lambda_1} + Dis_{\lambda_2} + \cdots + Dis_{\lambda_L}$$
where $Dis_{Spatial}$ and $Dis_{Spectral}$ are the spatial and spectral distance matrices, respectively; $Dis_{Spatial}.max$ is a matrix of the same dimensions as $Dis_{Spatial}$ in which every element equals the maximum value of $Dis_{Spatial}$, and $Dis_{Spectral}.max$ is defined analogously for $Dis_{Spectral}$; and $Dis_X$, $Dis_Y$, $Dis_Z$, $Dis_{\lambda_1}$, $Dis_{\lambda_2}$, ⋯, $Dis_{\lambda_L}$ are the pairwise distance matrices of the individual spatial coordinates and spectral bands. The calculation process is shown in Algorithm 1.
Algorithm 1: Construction of graphs for a multispectral point cloud.
  • Input: Multispectral point cloud $P = \{p_1, p_2, p_3, \ldots, p_k\} \in \mathbb{R}^{(L+3) \times k}$
  • Output: Feature matrix $X \in \mathbb{R}^{k \times (L+3)}$; spatial adjacency matrix $A_{spatial}$; spectral adjacency matrix $A_{spectral}$
1. Expand each point $p_i$ of the multispectral point cloud into its feature vector $p_i = [x_i, y_i, z_i, \lambda_{1i}, \lambda_{2i}, \ldots, \lambda_{Li}]$, $i = 1, 2, 3, \ldots, k$, so that $P$ can be represented as $\begin{bmatrix} x_1 & y_1 & z_1 & \lambda_{11} & \cdots & \lambda_{L1} \\ x_2 & y_2 & z_2 & \lambda_{12} & \cdots & \lambda_{L2} \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ x_k & y_k & z_k & \lambda_{1k} & \cdots & \lambda_{Lk} \end{bmatrix}$.
2. Split each column of $P$ into separate vectors $X, Y, Z, \lambda_1, \lambda_2, \ldots, \lambda_L$, then max–min normalize each vector.
3. For each vector, compute the pairwise squared-distance matrix; taking $X$ as an example, $Dis_X = X^2_{repeat} - 2XX^T + (X^2_{repeat})^T = \begin{bmatrix} \|x_1 - x_1\|^2 & \|x_1 - x_2\|^2 & \cdots & \|x_1 - x_k\|^2 \\ \|x_2 - x_1\|^2 & \|x_2 - x_2\|^2 & \cdots & \|x_2 - x_k\|^2 \\ \vdots & \vdots & \ddots & \vdots \\ \|x_k - x_1\|^2 & \|x_k - x_2\|^2 & \cdots & \|x_k - x_k\|^2 \end{bmatrix}$, where $X^2_{repeat} = \begin{bmatrix} x_1^2 & \cdots & x_1^2 \\ x_2^2 & \cdots & x_2^2 \\ \vdots & & \vdots \\ x_k^2 & \cdots & x_k^2 \end{bmatrix}$. This yields the matrices $Dis_X$, $Dis_Y$, $Dis_Z$, $Dis_{\lambda_1}$, $Dis_{\lambda_2}$, ⋯, $Dis_{\lambda_L}$.
4. Obtain the feature matrix $X \in \mathbb{R}^{k \times (L+3)} = [X, Y, Z, \lambda_1, \lambda_2, \ldots, \lambda_L]$.
5. Calculate the spatial and spectral distance matrices: $Dis_{Spatial} = Dis_X + Dis_Y + Dis_Z$ and $Dis_{Spectral} = Dis_{\lambda_1} + Dis_{\lambda_2} + \cdots + Dis_{\lambda_L}$.
6. Calculate the spatial and spectral adjacency matrices: $A_{spatial} = \mathrm{Normalized}(Dis_{Spatial}.max - Dis_{Spatial})$, $A_{spectral} = \mathrm{Normalized}(Dis_{Spectral}.max - Dis_{Spectral})$.
7. Return: $X \in \mathbb{R}^{k \times (L+3)}$, $A_{spatial}$, $A_{spectral}$.
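To make the construction concrete, the following sketch implements Algorithm 1 with NumPy; the function names and the dense k × k distance matrices are our own choices for illustration and assume the point (or superpoint) set is small enough to fit in memory.

```python
import numpy as np

def minmax_normalize(m):
    # Max-min normalization to [0, 1] (used for attribute vectors and adjacency matrices).
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def pairwise_sq_dist(v):
    # Pairwise squared distances of a single attribute vector (Algorithm 1, step 3):
    # Dis = v2_repeat - 2 v v^T + v2_repeat^T.
    v = v.reshape(-1, 1)
    sq = v ** 2
    return sq - 2.0 * (v @ v.T) + sq.T

def build_graphs(P):
    # P: (k, L+3) array whose columns are [x, y, z, lambda_1, ..., lambda_L].
    X = np.stack([minmax_normalize(P[:, j]) for j in range(P.shape[1])], axis=1)
    dis_spatial = sum(pairwise_sq_dist(X[:, j]) for j in range(3))
    dis_spectral = sum(pairwise_sq_dist(X[:, j]) for j in range(3, X.shape[1]))
    # A larger distance means a weaker correlation, so subtract from the maximum and normalize.
    A_spatial = minmax_normalize(dis_spatial.max() - dis_spatial)
    A_spectral = minmax_normalize(dis_spectral.max() - dis_spectral)
    return X, A_spatial, A_spectral

# Example with random data: 100 points, 3 spectral bands.
rng = np.random.default_rng(0)
X, A_spatial, A_spectral = build_graphs(rng.random((100, 6)))
print(X.shape, A_spatial.shape, A_spectral.shape)  # (100, 6) (100, 100) (100, 100)
```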

2.2. Deep Spatial Graph Convolution Network with Adaptive Spectral Aggregated Residuals

DSGCN-ASR effectively addresses the limitations of shallow graph neural networks in capturing the nonlinearity of multispectral point clouds in complex remote sensing scenes. In addition, it tackles the problem of previous methods lacking the ability to fully exploit joint spatial–spectral information. By incorporating several key techniques, DSGCN-ASR provides enhanced modeling and classification capabilities, ensuring the optimal use of spatial–spectral information.
Convolutional neural networks (CNNs) are important in the field of computer vision. The core CNN operation extracts features by applying a convolutional kernel that weights the pixel values in the neighborhood of each pixel. Similarly, graph convolution aggregates information from related nodes according to the set of edges $E$ to achieve feature extraction. The general form of graph convolution can be denoted as
$$H^{(l+1)} = f\left(H^{(l)}, A\right) = \sigma\left( A H^{(l)} W^{(l)} \right)$$
where $H$ denotes the hidden layer of the network, $A$ denotes the adjacency matrix, $l$ is the index of the hidden layer, and $W^{(l)}$ is the weight parameter matrix of the $l$th layer.
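As a minimal PyTorch sketch of this propagation rule (the class name, ReLU choice, and layer sizes are illustrative assumptions rather than details taken from the paper):

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    # One graph convolution layer: H_{l+1} = sigma(A H_l W_l).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)  # plays the role of W_l

    def forward(self, A, H):
        # A: (N, N) adjacency matrix; H: (N, in_dim) node features.
        return torch.relu(self.linear(A @ H))
```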
As the convolutional neural network aggregates information based on the pixel neighborhoods of the images and, according to Tobler’s first law of geography, a stronger correlation exists between neighboring land covers, we use the spatial graph for graph convolution operations in the deep backbone network. This also allows the model to extract more complex and abstract features, strengthening its capacity to capture the intricate nonlinearity present in the data. Additionally, DSGCN-ASR incorporates the adaptive spectral aggregated residuals (ASR) technique. ASR adaptively adjusts the weights of spectral features from the multiple channels in multispectral data. The feature obtained after graph convolution of the spectral graph is used as the residuals to be added to the hidden layer. Specifically, two initial graph convolutions of the same dimension are introduced before the backbone. After initial convolution of the spatial graph, the initial input feature ( H 0 ) is obtained; after initial convolution of the spectral graph, the spectral residual feature ( R ) is obtained. The initial convolution can be represented as
$$H^{(0)} = \sigma\left( A_{spatial}\, X_{k \times (L+3)}\, W_{spatial} \right)$$
$$R = \sigma\left( A_{spectral}\, X_{k \times (L+3)}\, W_{spectral} \right)$$
where $H^{(0)}$ serves as the input to the backbone, and $R$ serves as the residual.
The pattern of combining spectral residuals is shown in Figure 4. In the aggregation of spectral residuals, we use both concatenation and summation. The spectral residuals $R$ are concatenated to the right of the hidden layer $H^{(l)}$ before convolution in each layer. Given the use of the spatial graph in the graph convolution operation, we regard the hidden layer features of the network as spatial; accordingly, the residuals aggregated using the spectral graph are regarded as spectral. To balance the contributions of spatial–spectral information, we introduce a trainable adaptive parameter $\alpha$. After convolution, the spectral residuals $R$ are summed with the hidden layer using the adaptive weight $\alpha$ and added as residuals to the new hidden layer. This adaptive weighting mechanism enables the model to focus on the most informative channels, enhancing its ability to capture nonlinearity and improving classification accuracy. Thus, hidden-layer propagation with adaptive spectral residuals can be represented as
$$H^{(l+1)} = \sigma\left( \left[ A_{spatial} H^{(l)} \,\Vert\, R \right] W^{(l)} \right) + (1-\alpha) H^{(l)} + \alpha R$$
where $\Vert$ denotes concatenation along the feature dimension, and the adaptive weight $\alpha$ is a trainable parameter defined in the network. To ensure that the weight of the spectral residuals does not exceed 0.5 for each addition, we apply a sigmoid function to $\alpha$ and divide it by 2 before each use:
$$\alpha = \frac{\mathrm{sigmoid}(\alpha)}{2}$$
To address the issue of deep-feature oversmoothing, we introduce a weight parameter, denoted as $\beta$, which decreases as the depth of the network increases. Inspired by the concept of identity mapping in previous research [33], this parameter balances the contribution of the deep network weights: as the network layers deepen, the contribution of convolution to forward propagation diminishes, while the adaptive combination of spectral residuals with the previous hidden layer gradually takes precedence. This effectively alleviates the oversmoothing of deep features caused by excessive feature aggregation. The final hidden-layer propagation pattern can be denoted as
$$H^{(l+1)} = \sigma\left( \beta \left[ A_{spatial} H^{(l)} \,\Vert\, R \right] W^{(l)} \right) + (1-\beta)\left[ (1-\alpha) H^{(l)} + \alpha R \right]$$
where $\beta$ decays with the layer number $l$ of the network as follows:
$$\beta = \ln\left( 1 + \frac{1}{2l} \right) = \ln\left( \frac{2l+1}{2l} \right)$$
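Putting the pieces together, one backbone layer of DSGCN-ASR could be sketched in PyTorch as follows. This is our reading of the equations above, not the authors' released implementation: the module name, the use of ReLU as σ, and the handling of α as a per-layer scalar parameter are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class ASRLayer(nn.Module):
    # One propagation layer:
    #   H_{l+1} = sigma(beta [A_spatial H_l || R] W_l) + (1 - beta)((1 - alpha) H_l + alpha R),
    # with alpha = sigmoid(alpha_raw) / 2 and beta = ln(1 + 1 / (2 l)).
    def __init__(self, hidden_dim, layer_index):
        super().__init__()
        # Concatenation doubles the feature dimension before the shared weight W_l.
        self.linear = nn.Linear(2 * hidden_dim, hidden_dim, bias=False)
        self.alpha_raw = nn.Parameter(torch.zeros(1))              # trainable adaptive weight
        self.beta = math.log(1.0 + 1.0 / (2.0 * layer_index))      # decays with network depth

    def forward(self, A_spatial, H, R):
        alpha = torch.sigmoid(self.alpha_raw) / 2.0                # keeps the residual weight <= 0.5
        aggregated = torch.cat([A_spatial @ H, R], dim=1)          # spatial aggregation || spectral residual
        conv = torch.relu(self.beta * self.linear(aggregated))
        identity = (1.0 - alpha) * H + alpha * R
        return conv + (1.0 - self.beta) * identity
```

A full network would stack the two initial convolutions that produce $H^{(0)}$ and $R$, several such layers sharing $A_{spatial}$ and $R$, and a classification head.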
In the proposed DSGCN-ASR, we use negative log-likelihood loss to train the model. The loss function can be denoted as
$$Loss = -\sum_{i=1}^{k} \sum_{j=1}^{C} y_{ij} \log p_{ij}$$
where $k$ is the number of points in the dataset, $C$ is the number of classes, $y_{ij}$ is the ground truth of the $i$th point belonging to the $j$th class, and $p_{ij}$ is the predicted probability of the $i$th point belonging to the $j$th class.
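In PyTorch, this corresponds to applying nll_loss to log-probabilities; the sketch below additionally assumes a transductive setup in which only the labeled training superpoints (10% of them, as in Section 3) contribute to the loss via a boolean mask, which is our assumption rather than a detail stated in the paper.

```python
import torch.nn.functional as F

def training_loss(log_probs, labels, train_mask):
    # log_probs: (N, C) log-softmax outputs for all N nodes; labels: (N,) class indices.
    return F.nll_loss(log_probs[train_mask], labels[train_mask])
```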
By integrating these techniques, DSGCN-ASR effectively enhances the collaborative use of spatial–spectral information and strengthens the nonlinear fitting capability. Consequently, it provides notable advancements compared with prior methods in the classification performance of multispectral point clouds in complex remote sensing scenes.

3. Experiments

We conducted a series of comparative experiments, ablation studies, and parametric analyses using the proposed DSGCN-ASR. Two multispectral point cloud datasets of real scenes were used to conduct the experiments, i.e., Harbor of Tobermory (HT) and University of Houston (UH), as shown in Figure 1.
The HT dataset was manually labeled with nine distinct classes, namely barren, building, car, grass, powerline, road, ship, tree, and water, following the labeling scheme established in a previous study [21]. The UH dataset was manually classified into eight classes, encompassing barren, car, commercial buildings, grass, road, powerline, residential buildings, and tree, as shown in Figure 5. All experiments were performed on a device with an Intel (R) Core (TM) CPU i5-12600KF @3.70 GHz and one NVIDIA GeForce RTX 3060 GPU with 12 GB of memory. However, because HT contains 7,181,982 points and UH contains 4,436,470 points, which cannot be directly processed by this device, we used a previously reported method [34] to segment the multispectral point clouds into superpoints. The HT dataset was segmented into 9606 superpoints, and UH was segmented into 9350 superpoints. We used 10% of the superpoints as the training set.
To numerically measure the multispectral point cloud classification performance, we used precision, recall, F score, and IoU to evaluate each set of experiments. The above metrics were used for each class. To evaluate the overall performance of the whole scene, we used macro averaging to calculate the above metrics, in addition to overall accuracy (OA).
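These metrics can be computed from a confusion matrix as in the following generic sketch (our own implementation of the stated definitions, not the evaluation script used in the experiments):

```python
import numpy as np

def evaluate(conf):
    # conf[i, j]: number of points of ground-truth class i predicted as class j.
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + fn + 1e-12)
    f_score = 2 * precision * recall / (precision + recall + 1e-12)
    iou = tp / (tp + fp + fn + 1e-12)
    return {
        "OA": tp.sum() / conf.sum(),            # overall accuracy
        "macro_precision": precision.mean(),    # macro averaging: unweighted mean over classes
        "macro_recall": recall.mean(),
        "macro_F": f_score.mean(),
        "MIoU": iou.mean(),
    }
```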

3.1. Comparative Experiments

To validate the performance of the proposed DSGCN-ASR, several state-of-the-art graph neural networks (GCN [30], GAT [31], GCBNet [32], GCNII [33], and MaSGCN [29]) were selected to classify the multispectral point clouds and were comparatively analyzed. For this comparison, we constructed the graphs following the method outlined in Algorithm 1 and combined the spatial and spectral graphs in equal proportions:
$$A = \mathrm{Normalized}\left( \frac{A_{spatial} + A_{spectral}}{2} \right)$$
The input features were the same matrix $X \in \mathbb{R}^{k \times (L+3)}$ for all methods, and the same 10% of superpoints were used as training samples. The final classification results of all methods were remapped back to the original data according to the indices of the superpoints and evaluated on the original data, as sketched below. The labels of the superpoints were generated by majority voting over the labels of the points within the same superpoint. Therefore, the segmentation into superpoints inevitably caused some loss of classification performance, which we analyze later.
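A sketch of the superpoint labeling and remapping steps is given below; superpoint_ids is assumed to map each original point to the index of the superpoint that contains it (obtained from the supervoxel segmentation of [34]), and the function names are ours.

```python
import numpy as np

def superpoint_labels_by_vote(point_labels, superpoint_ids, num_superpoints):
    # Assign each superpoint the majority label of its member points.
    sp_labels = np.zeros(num_superpoints, dtype=np.int64)
    for s in range(num_superpoints):
        members = point_labels[superpoint_ids == s]
        sp_labels[s] = np.bincount(members).argmax()
    return sp_labels

def remap_to_points(sp_predictions, superpoint_ids):
    # Broadcast each superpoint's predicted class back to its member points for evaluation.
    return sp_predictions[superpoint_ids]
```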

3.1.1. HT Classification Results

The overall evaluation metrics for the HT classification results are shown in Table 1. The proposed DSGCN-ASR outperforms the other methods overall in the classification of HT, with an OA of 87.57%, macro precision of 74.23%, macro recall of 69.45%, macro F score of 71.76%, and MIoU of 59.51%. The OA of DSGCN-ASR is higher than that of the second-best method by 2.73%, outperforming the next-best method by 1.74% and 2.14% for macro recall and MIoU, respectively.
The classification results on the HT dataset are visualized in Figure 6. The visualized results show that the proposed DSGCN-ASR learns the information of the spatial distribution of the land cover as expected and achieves a fine delineation of the boundary with the help of the spectral information.
The other methods produced different apparent misclassifications; in contrast, the classification result of the proposed DSGCN-ASR is more in line with the ground truth. GCN and GAT produced cluttered distributions of misclassified points, GCNII and GCBNet classified large areas of water as car, and MaSGCN classified an area of water as barren.
The evaluation metrics for the classification results of each class are shown in Table 2. The classification results of the proposed DSGCN-ASR are more balanced, performing relatively well in all classes. In particular, the proposed method outperforms the other methods in classifying the building, grass, tree, and water classes. Combining the visualizations revealed that DSGCN-ASR confused the barren and road; tree and powerline; and car, ship, and building classes because the spectral information of these classes is similar and they are relatively close in spatial distribution, which makes distinguishing them challenging. In addition, the small number of powerline points in the dataset may have hindered the ability to learn these features, leading to poor performance.
Car is a small land cover target, and the spectral information associated with the car class is relatively complex. The car class also accounts for only a small number of points in the original point cloud data, so car is easily confused with barren. Moreover, owing to the memory limitations of the experimental equipment, we were unable to oversegment the multispectral point cloud, which reduced the number of effective samples. As shown in Figure 6b, superpoint segmentation strongly impacted car classification, which is the reason for the poor performance of all models on this class. A similar situation occurred for other classes.

3.1.2. UH Classification Results

The overall evaluation metrics for the classification results on the UH dataset are shown in Table 3. In the UH classification, DSGCN-ASR performs better than the other methods, with an OA of 78.20%, macro precision of 73.03%, macro recall of 65.41%, macro F score of 69.01%, and MIoU of 54.02%. The OA of DSGCN-ASR is higher than that of the second-best method by 8.73%, and DSGCN-ASR outperforms the second-best method by 3.04%, 0.80%, and 3.07% in macro recall, macro F score, and MIoU, respectively.
The classification results on the UH dataset are visualized in Figure 7. The figure demonstrates that the classification results produced by the proposed DSGCN-ASR are close to the ground truth and the performance limit. This is especially evident in the parking lot area in the upper-right corner of the scene. In addition, the rectangular area in the middle of the scene demonstrates the contrast among the methods. The ground truth for this area is regular rectangular barren land; however, GCNII, GCBNet, and MaSGCN all misclassify this area as road or car. GCN and GAT perform relatively better in this area but are more disturbed than the proposed DSGCN-ASR. However, DSGCN-ASR, like MaSGCN, incorrectly classifies road in the upper-right corner of the scene as barren. Overall, the proposed DSGCN-ASR retains the robustness of the spatial graph regarding land cover distribution with less interference on the UH dataset while using the spectral graph to enhance the accuracy of boundary classification.
The evaluation metrics for the classification results for each class are shown in Table 4. The proposed DSGCN-ASR achieved relatively high-quality performance for all classes; however, the metrics for car are poor. Combining the visualizations, we concluded that the poor classification of car was due to the effect of superpoint segmentation, as shown in Figure 7b. The proposed DSGCN-ASR provides substantial advantages over the other methods in the classes of barren, road, powerline, and tree. This conclusion is consistent with the visualization results. Commercial and residential buildings are difficult to distinguish because they are both buildings that have similar spatial and spectral information. However, the proposed DSGCN-ASR outperforms the other methods on the UH dataset in general.

3.2. Ablation Studies

Ablation studies were conducted to validate the effectiveness of the proposed joint-use scheme of spatial–spectral graphs. Different experimental groups were set up by controlling the graphs used in the backbone and residuals, which were used to analyze the respective contributions of the spatial and spectral graphs in the network and to validate the proposed DSGCN-ASR.
For each dataset, we conducted the following sets of experiments: (a) using the spatial graph in the backbone and the residuals; (b) using the spatial graph in the backbone and the spectral graph in the residuals; (c) using an equal-scale combined spatial–spectral graph (Equation (13)) in the backbone and residuals; (d) using the spectral graph in the backbone and the spatial graph in the residuals; and (e) using the spectral graph in the backbone and the residuals. The setup of experiments is shown in Table 5.
The overall evaluation metrics for the ablation results on the HT and UH datasets are shown in Table 6; the evaluation metrics for each class on the HT and UH datasets are shown in Table 7 and Table 8, respectively. The ablation results are visualized in Figure 8. In the experimental setup, the spatial graph contributed progressively less and the spectral graph progressively dominated from groups I to V. The results for groups I and V show that accurate classification is difficult to achieve using only one of the spatial and spectral graphs; they are consistent with our analysis in the Introduction, with spatial graphs tending to classify spatially adjacent points into the same class and spectral graphs being better at distinguishing spatially neighboring land cover.
The experiments in group III showed that the equiscale combination of spatial and spectral graphs, to some extent, could increase the accuracy of classification and achieve relatively good metrics. However, this combination also inherits the drawbacks of both graphs, with a simultaneous lack of clear distinction at the boundaries and interference from the chaotic spectral information.
Numerically, group II, which corresponds to the configuration used in the proposed DSGCN-ASR, achieved the best performance among the five groups of ablation experiments. Compared with the second-best group, group II was 0.48% ahead in OA, 2.1% ahead in macro recall, 0.35% ahead in macro F score, and 1.83% ahead in MIoU on the HT dataset. On the UH dataset, group II achieved a 2.60% OA lead, a 5.81% macro precision lead, a 4.25% macro F-score lead, and a 4.55% MIoU lead. Group IV, using the spectral graph in the backbone and the spatial graph in the residuals, also achieved good results, slightly outperforming group III overall. This indirectly corroborates the benefit of treating the spatial and spectral graphs differently, as proposed in this study.
Within the network architecture, the integration of information between spatial- and spectral-graph-based aggregation is primarily governed by a trainable adaptive weight. This allows groups II and IV to achieve an approximate integration of information. Group II employs the spatial graph in the backbone for convolution, effectively accessing the spatial distribution of relationships among land cover classes. In comparison, group IV uses the spectral graph for convolution, prioritizing the spectral similarities between land cover classes. However, this approach results in the inclusion of some irrelevant connections. This finding is indirectly supported by the observation of Figure 8d,i, where the visualization of the results reveals numerous scattered, misclassified points. This performance aligns with that of group V. The outcomes of the ablation studies provide evidence for the practicality and effectiveness of the proposed joint spatial–spectral use strategy employed by DSGCN-ASR.

3.3. Parametric Analysis

We then conducted experiments to analyze the impact of the α and β parameters on the classification performance, varying these parameters while keeping all other settings constant. We set α to a fixed value of 0, 0.25, 0.5, 0.75, or 1 for comparison with the adaptive α. We followed the same approach for β, performing five sets of experiments with values of 0, 0.25, 0.5, 0.75, and 1 for comparison with the decreasing β. The results of the parametric analysis for α are visualized in Figure 9, and those for β in Figure 10. The evaluation metrics for the parametric analyses of α and β are shown in Table 9 and Table 10, respectively. To more intuitively show the impact of the parameters on the classification results, we also plotted histograms and line graphs, as shown in Figure 11.
The α parameter plays a crucial role in controlling the weight of the spectral residuals in each layer of the network. As α increases, the model incorporates more spectral information, enhancing its ability to differentiate between different land cover classes in the immediate neighborhood. However, excessively large values of α can compromise the robustness of the model, leading to patchier misclassifications. These findings align with the conclusions drawn in the Introduction. The adaptive spectral residual strategy employed in our approach allows the model to autonomously adjust the acquisition weights of spectral information in each layer. As a result, the final classification performance is substantially superior to that achieved by other groups using a fixed α .
Deep graph convolutional networks often oversmooth deep features. To tackle this issue, we introduced the β parameter based on the concept of identity mapping [33]. The value of β determines the proportion of the hidden layer features obtained from the previous convolution in the model. The experimental results indicate that as β increases, the model becomes more susceptible to the oversmoothing of deep features. The model fails in the two groups where β exceeds 0.5. Our approach employs the strategy of a decreasing β with an increasing number of model layers, which was validated in a previous study [33]. Once again, this strategy proves effective in mitigating the problem of oversmoothing in our approach.
The histograms and line graphs provide a clearer visualization of the impact of the α and β parameters on the classification performance of the model. These visual representations highlight the advantage of our developed strategy in the parametric analysis experiments. The results demonstrate the effectiveness of our approach in improving classification performance.

4. Conclusions

This study focused on the classification of multispectral point cloud data, and we developed a novel method called DSGCN-ASR. In contrast to existing methods, DSGCN-ASR adopts a differentiated treatment of spatial and spectral graphs, effectively leveraging their respective advantages to enhance classification performance. By preserving the robustness of the spatial graph for extraction of land cover relationships and applying the discriminatory ability of the spectral graph to distinguish neighboring land cover classes, DSGCN-ASR achieves superior classification performance. Experimental validation using real-world multispectral point cloud datasets and comparisons with state-of-the-art graph-based methods demonstrated the efficacy of DSGCN-ASR in effectively leveraging spatial–spectral information. This study provides valuable insights into the joint use of spatial–spectral information in multispectral point clouds, contributing to accurate mapping and fine-grained land cover classification. Further exploration of this method holds promise for the advancement of the field of elaborate mapping in navigation systems.

Author Contributions

Conceptualization, Q.W., Z.Z. and T.S.; methodology, Q.W. and Z.Z.; software, Z.Z.; validation, Z.Z.; formal analysis, Z.Z., X.C. and J.S.; resources, Q.W., J.S. and T.S.; data curation, Z.Z. and X.C.; writing—original draft preparation, Z.Z.; writing—review and editing, Q.W. and J.S.; visualization, Z.Z.; supervision, T.S.; project administration, Z.W.; funding acquisition, Z.W., Q.W., and T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded in part by the Youth Project of the National Natural Science Foundation of China under grant 62201237, the Yunnan Fundamental Research Projects under grants 202101BE070001-008 and 202301AV070003, the Youth Project of the Xingdian Talent Support Plan of Yunnan Province under grant KKRD202203068, and the Major Science and Technology Projects in Yunnan Province under grants 202202AD080013 and 202302AG050009.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The HT dataset used in this study was obtained through an online application at https://www.isprs.org/news/newsletter/2017-03/index.html (accessed in March 2017). The UH dataset was obtained from http://www.classic.grss-ieee.org/community/technical-committees/data-fusion/2018-ieee-grss-data-fusion-contest/ (accessed on 16 February 2017), and the data were manually labeled.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  2. Jing, Z.; Guan, H.; Zhao, P.; Li, D.; Yu, Y.; Zang, Y.; Wang, H.; Li, J. Multispectral LiDAR point cloud classification using SE-PointNet++. Remote Sens. 2021, 13, 2516. [Google Scholar] [CrossRef]
  3. Chen, Y.; Liu, G.; Xu, Y.; Pan, P.; Xing, Y. PointNet++ network architecture with individual point level and global features on centroid for ALS point cloud classification. Remote Sens. 2021, 13, 472. [Google Scholar] [CrossRef]
  4. Zhang, J.; Lin, X.; Ning, X. SVM-based classification of segmented airborne LiDAR point clouds in urban areas. Remote Sens. 2013, 5, 3749–3775. [Google Scholar] [CrossRef]
  5. Lodha, S.K.; Fitzpatrick, D.M.; Helmbold, D.P. Aerial lidar data classification using adaboost. In Proceedings of the Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007), Montreal, QC, Canada, 21–23 August 2007; pp. 435–442. [Google Scholar]
  6. Chehata, N.; Guo, L.; Mallet, C. Airborne lidar feature selection for urban classification using random forests. In Proceedings of the Laserscanning, Paris, France, 1–2 September 2009. [Google Scholar]
  7. Munoz, D.; Bagnell, J.A.; Vandapel, N.; Hebert, M. Contextual classification with functional max-margin markov networks. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 975–982. [Google Scholar]
  8. Niemeyer, J.; Wegner, J.D.; Mallet, C.; Rottensteiner, F.; Soergel, U. Conditional random fields for urban scene classification with full waveform LiDAR data. In Proceedings of the Photogrammetric Image Analysis: ISPRS Conference, PIA 2011, Munich, Germany, 5–7 October 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 233–244. [Google Scholar]
  9. Yu, Y.; Jiang, T.; Gao, J.; Guan, H.; Li, D.; Gao, S.; Tang, E.; Wang, W.; Tang, P.; Li, J. CapViT: Cross-context capsule vision transformers for land cover classification with airborne multispectral LiDAR data. Int. J. Appl. Earth Obs. Geoinf. 2022, 111, 102837. [Google Scholar] [CrossRef]
  10. Yu, Y.; Liu, C.; Guan, H.; Wang, L.; Gao, S.; Zhang, H.; Zhang, Y.; Li, J. Land cover classification of multispectral lidar data with an efficient self-attention capsule network. IEEE Geosci. Remote Sens. Lett. 2021, 19, 6501505. [Google Scholar] [CrossRef]
  11. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph cnn for learning on point clouds. Acm Trans. Graph. 2019, 38, 146. [Google Scholar] [CrossRef]
  12. Liu, Y.; Fan, B.; Xiang, S.; Pan, C. Relation-shape convolutional neural network for point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8895–8904. [Google Scholar]
  13. Bakuła, K.; Kupidura, P.; Jełowicki, Ł. Testing of land cover classification from multispectral airborne laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 161–169. [Google Scholar] [CrossRef]
  14. Morsy, S.; Shaker, A.; El-Rabbany, A. Multispectral LiDAR data for land cover classification of urban areas. Sensors 2017, 17, 958. [Google Scholar] [CrossRef]
  15. Sun, J.; Shi, S.; Chen, B.; Du, L.; Yang, J.; Gong, W. Combined application of 3D spectral features from multispectral LiDAR for classification. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 5264–5267. [Google Scholar]
  16. Teo, T.A.; Wu, H.M. Analysis of land cover classification using multi-wavelength LiDAR system. Appl. Sci. 2017, 7, 663. [Google Scholar] [CrossRef]
  17. Matikainen, L.; Hyyppä, J.; Litkey, P. Multispectral airborne laser scanning for automated map updating. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 323–330. [Google Scholar] [CrossRef]
  18. Pan, S.; Guan, H.; Chen, Y.; Yu, Y.; Gonçalves, W.N.; Junior, J.M.; Li, J. Land-cover classification of multispectral LiDAR data using CNN with optimized hyper-parameters. Isprs J. Photogramm. Remote Sens. 2020, 166, 241–254. [Google Scholar] [CrossRef]
  19. Yu, Y.; Guan, H.; Li, D.; Gu, T.; Wang, L.; Ma, L.; Li, J. A hybrid capsule network for land cover classification using multispectral LiDAR data. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1263–1267. [Google Scholar] [CrossRef]
  20. Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. Randla-net: Efficient semantic segmentation of large-scale point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11108–11117. [Google Scholar]
  21. Wang, Q.; Gu, Y. A discriminative tensor representation model for feature extraction and classification of multispectral LiDAR data. IEEE Trans. Geosci. Remote Sens. 2019, 58, 1568–1586. [Google Scholar] [CrossRef]
  22. Niemeyer, J.; Rottensteiner, F.; Sörgel, U.; Heipke, C. Hierarchical higher order crf for the classification of airborne lidar point clouds in urban areas. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 655–662. [Google Scholar] [CrossRef]
  23. Wichmann, V.; Bremer, M.; Lindenberger, J.; Rutzinger, M.; Georges, C.; Petrini-Monteferri, F. Evaluating the potential of multispectral airborne lidar for topographic mapping and land cover classification. Isprs Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 2, 113–119. [Google Scholar] [CrossRef]
  24. Anguelov, D.; Taskarf, B.; Chatalbashev, V.; Koller, D.; Gupta, D.; Heitz, G.; Ng, A. Discriminative learning of markov random fields for segmentation of 3d scan data. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 169–176. [Google Scholar]
  25. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Philip, S.Y. A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4–24. [Google Scholar] [CrossRef]
  26. Wang, L.; Huang, Y.; Hou, Y.; Zhang, S.; Shan, J. Graph attention convolution for point cloud semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 10296–10305. [Google Scholar]
  27. Zhao, P.; Guan, H.; Li, D.; Yu, Y.; Wang, H.; Gao, K.; Junior, J.M.; Li, J. Airborne multispectral LiDAR point cloud classification with a feature reasoning-based graph convolution network. Int. J. Appl. Earth Obs. Geoinf. 2021, 105, 102634. [Google Scholar] [CrossRef]
  28. Wen, C.; Li, X.; Yao, X.; Peng, L.; Chi, T. Airborne LiDAR point cloud classification with global-local graph attention convolution neural network. Isprs J. Photogramm. Remote Sens. 2021, 173, 181–194. [Google Scholar] [CrossRef]
  29. Wang, Q.; Gu, Y.; Yang, M.; Wang, C. Multi-attribute smooth graph convolutional network for multispectral points classification. Sci. China Technol. Sci. 2021, 64, 2509–2522. [Google Scholar] [CrossRef]
  30. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017; pp. 1–14. [Google Scholar]
  31. Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  32. Zhang, T.; Wang, X.; Xu, X.; Chen, C.P. GCB-Net: Graph convolutional broad network and its application in emotion recognition. IEEE Trans. Affect. Comput. 2019, 13, 379–388. [Google Scholar] [CrossRef]
  33. Chen, M.; Wei, Z.; Huang, Z.; Ding, B.; Li, Y. Simple and deep graph convolutional networks. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 1725–1735. [Google Scholar]
  34. Lin, Y.; Wang, C.; Zhai, D.; Li, W.; Li, J. Toward better boundary preserved supervoxel segmentation for 3D point clouds. Isprs J. Photogramm. Remote Sens. 2018, 143, 39–47. [Google Scholar] [CrossRef]
Figure 1. Visualization of two scenes of multispectral point cloud datasets: (a) Harbor of Tobermory (HT) and (b) University of Houston (UH).
Figure 2. Visualization of differences between three technical routes for constructing graphs from two datasets: (a) classification with a spatial graph for the HT dataset; (b) classification with a spectral graph for the HT dataset; (c) classification with a combined graph for the HT dataset; (d) classification with a spatial graph for the UH dataset; (e) classification with a spectral graph for the UH dataset; (f) classification with a combined graph for the UH dataset.
Figure 3. Overall structure of the proposed DSGCN-ASR.
Figure 4. Combining patterns of adaptive spectral residuals in each layer.
Figure 5. Visualization of ground truth for two datasets: (a) HT; (b) UH.
Figure 6. Visualization of classification results on HT dataset. (a) Visualization of the ground truth; (b) performance limit due to superpoint segmentation; visualization of classification results of (c) GCN, (d) GCNII, (e) GAT, (f) GCBNet, (g) MaSGCN, and (h) DSGCN-ASR (ours).
Figure 7. Visualization of classification results on the UH dataset: (a) visualization of the ground truth; (b) performance limit due to superpoint segmentation. Visualization of classification results of (c) GCN, (d) GCNII, (e) GAT, (f) GCBNet, (g) MaSGCN, and (h) DSGCN-ASR (ours).
Figure 8. Visualization of ablation results: (a) group I, (b) group II, (c) group III, (d) group IV, and (e) group V on the HT dataset; (f) group I, (g) group II, (h) group III, (i) group IV, and (j) group V on the UH dataset.
Figure 9. Visualization of the parametric analysis experiment for α when set to (a) 0, (b) 0.25, (c) 0.5, (d) 0.75, (e) 1, and (f) an adaptive value on the HT dataset and (g) 0, (h) 0.25, (i) 0.5, (j) 0.75, (k) 1, and (l) an adaptive value on the UH dataset.
Figure 10. Visualization of the parametric analysis experiment for β when set to (a) 0, (b) 0.25, (c) 0.5, (d) 0.75, (e) 1, and (f) a decreasing value on the HT dataset and (g) 0, (h) 0.25, (i) 0.5, (j) 0.75, (k) 1, and (l) a decreasing value on the UH dataset.
Figure 11. Histograms and line graphs of results of parametric analysis experiment. (a) Histogram of α on the HT dataset. (b) Line graph of α on the HT dataset. (c) Histogram of α on the UH dataset. (d) Line graph of α on the UH dataset. (e) Histogram of β on the HT dataset. (f) Line graph of β on the HT dataset. (g) Histogram of β on the UH dataset. (h) Line graph of β on the UH dataset.
Table 1. Overall evaluation metrics (%) for classification results on the HT dataset.
Method | GCN [30] | GCNII [33] | GAT [31] | GCBNet [32] | MaSGCN [29] | DSGCN-ASR (Ours)
OA | 81.36 | 84.83 | 84.77 | 84.70 | 82.81 | 87.57
Macro precision | 72.30 | 79.17 | 71.71 | 77.84 | 69.55 | 74.23
Macro recall | 61.32 | 64.68 | 62.04 | 66.58 | 67.71 | 69.45
Macro F score | 66.36 | 71.19 | 66.53 | 71.77 | 68.62 | 71.76
MIoU | 51.84 | 55.96 | 53.08 | 57.36 | 54.94 | 59.51
Maximum values in the same metrics are marked in bold.
Table 2. Evaluation metrics (%) for classification results in each class on the HT dataset.
Method | Metric | Barren | Building | Car | Grass | Powerline | Road | Ship | Tree | Water
GCN [30] | Precision | 75.50 | 72.88 | 33.42 | 87.70 | 66.37 | 82.38 | 57.34 | 83.85 | 91.24
GCN [30] | Recall | 82.45 | 69.97 | 20.24 | 81.09 | 4.89 | 70.57 | 34.61 | 99.20 | 88.85
GCN [30] | F-score | 78.82 | 71.39 | 25.21 | 84.27 | 9.11 | 76.02 | 43.16 | 90.88 | 90.03
GCN [30] | IoU | 65.04 | 55.51 | 14.42 | 72.81 | 4.77 | 61.31 | 27.52 | 83.29 | 81.86
GCNII [33] | Precision | 71.42 | 79.15 | 38.97 | 88.60 | 82.17 | 88.57 | 82.58 | 89.35 | 91.71
GCNII [33] | Recall | 85.88 | 77.71 | 28.40 | 88.44 | 9.12 | 67.80 | 31.06 | 99.40 | 94.29
GCNII [33] | F-score | 77.99 | 78.42 | 32.86 | 88.52 | 16.42 | 76.80 | 45.14 | 94.11 | 92.98
GCNII [33] | IoU | 63.92 | 64.50 | 19.66 | 79.41 | 8.94 | 62.34 | 29.15 | 88.87 | 86.89
GAT [31] | Precision | 72.37 | 71.53 | 29.14 | 88.46 | 51.50 | 79.46 | 70.75 | 92.60 | 89.59
GAT [31] | Recall | 79.67 | 69.64 | 18.86 | 82.95 | 21.48 | 67.78 | 35.53 | 98.98 | 83.46
GAT [31] | F-score | 75.85 | 70.57 | 22.90 | 85.62 | 30.32 | 73.16 | 47.31 | 95.68 | 86.42
GAT [31] | IoU | 61.09 | 54.53 | 12.93 | 74.85 | 17.87 | 57.67 | 30.98 | 91.72 | 76.08
GCBNet [32] | Precision | 62.48 | 90.04 | 42.23 | 88.37 | 69.87 | 91.39 | 75.02 | 90.54 | 90.63
GCBNet [32] | Recall | 85.51 | 69.97 | 32.96 | 81.76 | 41.27 | 58.92 | 32.42 | 99.33 | 97.02
GCBNet [32] | F-score | 72.21 | 78.75 | 37.02 | 84.94 | 51.89 | 71.65 | 45.27 | 94.73 | 93.71
GCBNet [32] | IoU | 56.50 | 64.95 | 22.72 | 73.82 | 35.04 | 55.83 | 29.26 | 89.99 | 88.17
MaSGCN [29] | Precision | 59.29 | 81.43 | 40.32 | 71.74 | 41.06 | 73.88 | 67.51 | 94.90 | 95.82
MaSGCN [29] | Recall | 71.21 | 73.56 | 57.01 | 67.02 | 33.76 | 55.67 | 56.19 | 97.41 | 97.60
MaSGCN [29] | F-score | 64.70 | 77.29 | 47.23 | 69.30 | 37.05 | 63.50 | 61.33 | 96.13 | 96.70
MaSGCN [29] | IoU | 47.82 | 62.99 | 30.92 | 53.02 | 22.74 | 46.52 | 44.23 | 92.56 | 93.62
DSGCN-ASR (ours) | Precision | 72.18 | 88.36 | 29.28 | 90.95 | 69.95 | 77.08 | 47.02 | 95.41 | 97.86
DSGCN-ASR (ours) | Recall | 78.40 | 78.74 | 38.73 | 86.34 | 33.12 | 64.81 | 48.64 | 99.05 | 97.24
DSGCN-ASR (ours) | F-score | 75.16 | 83.27 | 33.35 | 88.59 | 44.96 | 70.41 | 47.81 | 97.19 | 97.55
DSGCN-ASR (ours) | IoU | 60.21 | 71.34 | 20.01 | 79.51 | 29.00 | 54.34 | 31.42 | 94.54 | 95.21
Maximum values in the same metrics are marked in bold.
Table 3. Overall evaluation metrics (%) for classification results on the UH dataset.
Method | GCN [30] | GCNII [33] | GAT [31] | GCBNet [32] | MaSGCN [29] | DSGCN-ASR (Ours)
OA | 67.39 | 66.80 | 61.31 | 69.47 | 64.53 | 78.20
Macro precision | 67.76 | 72.75 | 66.31 | 75.26 | 68.48 | 73.03
Macro recall | 54.30 | 58.29 | 50.87 | 62.37 | 57.86 | 65.41
Macro F score | 60.29 | 64.72 | 57.57 | 68.21 | 62.72 | 69.01
MIoU | 42.89 | 46.52 | 38.81 | 50.95 | 45.03 | 54.02
Maximum values in the same metrics are marked in bold.
Table 4. Evaluation metrics (%) for classification results in each class on the UH dataset.
Method | Metric | Barren | Car | Commercial | Grass | Road | Powerline | Residential | Tree
GCN [30] | Precision | 54.13 | 36.22 | 70.84 | 81.62 | 76.47 | 70.78 | 74.60 | 77.39
GCN [30] | Recall | 80.05 | 15.41 | 49.31 | 74.47 | 51.76 | 16.41 | 51.94 | 95.05
GCN [30] | F score | 64.59 | 21.63 | 58.14 | 77.88 | 61.73 | 26.64 | 61.24 | 85.32
GCN [30] | IoU | 47.70 | 12.12 | 40.99 | 63.77 | 44.65 | 15.37 | 44.13 | 74.39
GCNII [33] | Precision | 44.95 | 45.80 | 81.14 | 84.53 | 85.18 | 81.93 | 77.36 | 81.10
GCNII [33] | Recall | 83.79 | 13.00 | 59.87 | 73.33 | 49.13 | 22.34 | 68.81 | 96.07
GCNII [33] | F score | 58.51 | 20.26 | 68.90 | 78.53 | 62.31 | 35.10 | 72.83 | 87.95
GCNII [33] | IoU | 41.35 | 11.27 | 52.56 | 64.65 | 45.26 | 21.29 | 57.27 | 78.50
GAT [31] | Precision | 38.88 | 55.28 | 59.79 | 80.38 | 77.53 | 62.68 | 79.12 | 76.85
GAT [31] | Recall | 80.00 | 12.65 | 31.70 | 74.45 | 47.13 | 20.40 | 46.77 | 93.85
GAT [31] | F score | 52.33 | 20.59 | 41.43 | 77.30 | 58.62 | 30.78 | 58.79 | 84.50
GAT [31] | IoU | 35.44 | 11.48 | 26.13 | 63.00 | 41.47 | 18.19 | 41.63 | 73.16
GCBNet [32] | Precision | 46.94 | 60.35 | 80.20 | 87.11 | 85.25 | 72.86 | 83.07 | 86.28
GCBNet [32] | Recall | 85.38 | 20.16 | 71.41 | 78.00 | 46.53 | 30.04 | 71.46 | 95.97
GCBNet [32] | F score | 60.58 | 30.22 | 75.55 | 82.30 | 60.20 | 42.54 | 76.83 | 90.87
GCBNet [32] | IoU | 43.45 | 17.80 | 60.71 | 69.93 | 43.06 | 27.02 | 62.37 | 83.26
MaSGCN [29] | Precision | 43.20 | 39.28 | 72.71 | 85.18 | 78.14 | 64.96 | 86.02 | 78.32
MaSGCN [29] | Recall | 80.52 | 20.97 | 73.93 | 66.49 | 42.97 | 33.89 | 49.76 | 94.35
MaSGCN [29] | F score | 56.23 | 27.35 | 73.31 | 74.68 | 55.45 | 44.54 | 63.05 | 85.59
MaSGCN [29] | IoU | 39.11 | 15.84 | 57.87 | 59.59 | 38.36 | 28.65 | 46.04 | 74.81
DSGCN-ASR (ours) | Precision | 75.78 | 33.89 | 67.28 | 83.02 | 72.70 | 79.49 | 83.82 | 88.28
DSGCN-ASR (ours) | Recall | 80.72 | 28.60 | 71.00 | 80.46 | 68.65 | 32.39 | 67.05 | 94.42
DSGCN-ASR (ours) | F score | 78.18 | 31.02 | 69.09 | 81.72 | 70.62 | 46.02 | 74.51 | 91.24
DSGCN-ASR (ours) | IoU | 64.17 | 18.36 | 52.77 | 69.09 | 54.58 | 29.89 | 59.37 | 83.90
Maximum values in the same metrics are marked in bold.
Table 5. Experimental setup for ablation studies.
| Group | Backbone | Residuals |
|---|---|---|
| I | Spatial Graph | Spatial Graph |
| II * | Spatial Graph | Spectral Graph |
| III | Combined Graph | Combined Graph |
| IV | Spectral Graph | Spatial Graph |
| V | Spectral Graph | Spectral Graph |
* In the proposed DSGCN-ASR, we use the spatial graph in the backbone and the spectral graph in the residual, as in II.
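The five ablation groups differ only in which graph drives the deep convolution (the backbone) and which graph the aggregated residual is drawn from. The PyTorch sketch below illustrates one GCNII-style layer update under such a backbone/residual split; the class name, the exact mixing formula, and the use of dense normalized adjacency matrices are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraphConvWithAggregatedResidual(nn.Module):
    """One sketch of a layer for the Table 5 setups: features are propagated
    over the backbone graph, a residual aggregated over a second graph is
    mixed in with weight alpha, and beta controls how strongly the linear
    transformation acts (small beta keeps the layer close to an identity)."""

    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Linear(dim, dim, bias=False)

    def forward(self, h, a_backbone, a_residual, h0, alpha, beta):
        # a_backbone, a_residual: (N, N) row-normalized adjacency matrices
        propagated = a_backbone @ h            # aggregation over the backbone graph
        residual = a_residual @ h0             # aggregated residual of initial features
        mixed = (1 - alpha) * propagated + alpha * residual
        return torch.relu((1 - beta) * mixed + beta * self.weight(mixed))

# Example: group II places the spatial graph in the backbone and the
# spectral graph in the residual branch.
n, d = 6, 16
h0 = torch.randn(n, d)
a_spatial = torch.softmax(torch.randn(n, n), dim=1)    # stand-in adjacencies
a_spectral = torch.softmax(torch.randn(n, n), dim=1)
layer = GraphConvWithAggregatedResidual(d)
out = layer(h0, a_spatial, a_spectral, h0, alpha=0.5, beta=0.5)
```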
Table 6. Evaluation metrics (%) for ablation studies.
| Dataset | Group | Setup | OA | Macro Precision | Macro Recall | Macro F Score | MIoU |
|---|---|---|---|---|---|---|---|
| HT | I | Spatial–Spatial | 70.85 | 67.33 | 55.73 | 60.99 | 42.79 |
| | II * | Spatial–Spectral | 87.57 | 74.23 | 69.45 | 71.76 | 59.51 |
| | III | Combined–Combined | 83.04 | 67.97 | 67.35 | 67.66 | 52.14 |
| | IV | Spectral–Spatial | 87.09 | 76.70 | 66.81 | 71.41 | 57.67 |
| | V | Spectral–Spectral | 77.87 | 63.19 | 52.74 | 57.49 | 43.02 |
| UH | I | Spatial–Spatial | 68.50 | 67.23 | 55.90 | 61.04 | 43.50 |
| | II * | Spatial–Spectral | 78.20 | 73.03 | 65.41 | 69.01 | 54.02 |
| | III | Combined–Combined | 75.59 | 63.11 | 65.51 | 64.29 | 47.60 |
| | IV | Spectral–Spatial | 74.81 | 66.41 | 63.18 | 64.76 | 49.46 |
| | V | Spectral–Spectral | 68.94 | 63.83 | 55.87 | 59.58 | 44.61 |
* In the proposed DSGCN-ASR, we use the spatial graph in the backbone and the spectral graph in the residual, as in group II. Maximum values in the same metrics are marked in bold.
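The backbone/residual choices above presuppose separate spatial and spectral graphs built over the same points. A common way to construct such graphs is k-nearest-neighbor search in the respective feature space; the SciPy-based sketch below is a plausible reconstruction under that assumption only (the function name, the value of k, and the simple row normalization are illustrative choices, not details taken from the paper).

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_adjacency(features, k=10):
    """Build a symmetric, row-normalized k-NN adjacency from a feature matrix.
    Using XYZ coordinates yields a 'spatial graph', using the spectral bands a
    'spectral graph', and concatenating both gives a 'combined graph'."""
    tree = cKDTree(features)
    _, idx = tree.query(features, k=k + 1)            # nearest neighbor is the point itself
    n = features.shape[0]
    adj = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    adj[rows, idx[:, 1:].ravel()] = 1.0               # drop the self-neighbor
    adj = np.maximum(adj, adj.T)                      # symmetrize
    return adj / adj.sum(axis=1, keepdims=True)       # simple row normalization

points = np.random.rand(100, 3)                       # XYZ coordinates
spectra = np.random.rand(100, 6)                      # multispectral intensities
a_spatial, a_spectral = knn_adjacency(points), knn_adjacency(spectra)
```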
Table 7. Evaluation metrics (%) for ablation studies in each class on the HT dataset.
| Group | Metric | Barren | Building | Car | Grass | Powerline | Road | Ship | Tree | Water |
|---|---|---|---|---|---|---|---|---|---|---|
| I | Precision | 18.05 | 68.02 | 46.27 | 79.71 | 53.95 | 76.38 | 80.08 | 84.60 | 98.93 |
| | Recall | 72.97 | 63.95 | 48.93 | 44.93 | 5.21 | 42.80 | 34.07 | 96.96 | 91.77 |
| | F score | 28.94 | 65.93 | 47.56 | 57.47 | 9.50 | 54.86 | 47.80 | 90.36 | 95.22 |
| | IoU | 16.92 | 49.17 | 31.20 | 40.32 | 4.98 | 37.80 | 31.40 | 82.42 | 90.87 |
| II * | Precision | 72.18 | 88.36 | 29.28 | 90.95 | 69.95 | 77.08 | 47.02 | 95.41 | 97.86 |
| | Recall | 78.40 | 78.74 | 38.73 | 86.34 | 33.12 | 64.81 | 48.64 | 99.05 | 97.24 |
| | F score | 75.16 | 83.27 | 33.35 | 88.59 | 44.96 | 70.41 | 47.81 | 97.19 | 97.55 |
| | IoU | 60.21 | 71.34 | 20.01 | 79.51 | 29.00 | 54.34 | 31.42 | 94.54 | 95.21 |
| III | Precision | 91.12 | 87.90 | 24.26 | 74.48 | 73.92 | 25.14 | 48.63 | 93.19 | 93.04 |
| | Recall | 61.04 | 74.38 | 27.80 | 96.36 | 14.50 | 68.74 | 70.33 | 99.23 | 93.75 |
| | F score | 73.11 | 80.58 | 25.91 | 84.02 | 24.24 | 36.82 | 57.50 | 96.12 | 93.39 |
| | IoU | 57.62 | 67.48 | 14.88 | 72.44 | 13.79 | 22.56 | 40.35 | 92.52 | 87.61 |
| IV | Precision | 78.32 | 84.55 | 39.76 | 88.41 | 82.76 | 80.76 | 54.18 | 93.08 | 88.48 |
| | Recall | 80.99 | 80.07 | 34.71 | 94.18 | 16.80 | 69.97 | 34.58 | 98.72 | 91.23 |
| | F score | 79.63 | 82.25 | 37.06 | 91.20 | 27.94 | 74.98 | 42.21 | 95.82 | 89.83 |
| | IoU | 66.16 | 69.85 | 22.75 | 83.83 | 16.24 | 59.97 | 26.75 | 91.98 | 81.54 |
| V | Precision | 61.76 | 75.02 | 14.23 | 91.46 | 68.60 | 78.73 | 39.44 | 86.66 | 52.80 |
| | Recall | 86.48 | 52.65 | 6.70 | 84.11 | 19.96 | 59.22 | 23.60 | 97.82 | 44.07 |
| | F score | 72.06 | 61.88 | 9.11 | 87.63 | 30.92 | 67.60 | 29.53 | 91.90 | 48.04 |
| | IoU | 56.32 | 44.80 | 4.77 | 77.98 | 18.29 | 51.06 | 17.32 | 85.02 | 31.62 |
* In the proposed DSGCN-ASR, we use the spatial graph in the backbone and the spectral graph in the residual, as in group II. Maximum values in the same metrics are marked in bold.
Table 8. Evaluation metrics (%) for ablation studies in each class on the UH dataset.
| Group | Metric | Barren | Car | Commercial | Grass | Road | Powerline | Residential | Tree |
|---|---|---|---|---|---|---|---|---|---|
| I | Precision | 64.18 | 55.39 | 58.51 | 80.67 | 59.57 | 63.56 | 81.98 | 73.95 |
| | Recall | 77.81 | 28.15 | 52.84 | 62.45 | 60.15 | 24.33 | 48.89 | 92.55 |
| | F score | 70.34 | 37.33 | 55.53 | 70.40 | 59.86 | 35.19 | 61.25 | 82.21 |
| | IoU | 54.25 | 22.95 | 38.44 | 54.32 | 42.71 | 21.35 | 44.15 | 69.80 |
| II * | Precision | 75.78 | 33.89 | 67.28 | 83.02 | 72.70 | 79.49 | 83.82 | 88.28 |
| | Recall | 80.72 | 28.60 | 71.00 | 80.46 | 68.65 | 32.39 | 67.05 | 94.42 |
| | F score | 78.18 | 31.02 | 69.09 | 81.72 | 70.62 | 46.02 | 74.51 | 91.24 |
| | IoU | 64.17 | 18.36 | 52.77 | 69.09 | 54.58 | 29.89 | 59.37 | 83.90 |
| III | Precision | 80.37 | 26.77 | 24.25 | 73.37 | 58.11 | 65.09 | 85.75 | 91.21 |
| | Recall | 73.47 | 22.67 | 82.86 | 84.46 | 69.83 | 44.18 | 51.48 | 95.16 |
| | F score | 76.76 | 24.55 | 37.52 | 78.53 | 63.43 | 52.63 | 64.33 | 93.14 |
| | IoU | 62.29 | 13.99 | 23.09 | 64.64 | 46.45 | 35.71 | 47.42 | 87.16 |
| IV | Precision | 65.61 | 34.63 | 52.98 | 83.17 | 79.19 | 42.12 | 83.21 | 90.40 |
| | Recall | 82.20 | 25.31 | 70.11 | 78.63 | 58.59 | 46.62 | 50.57 | 93.45 |
| | F score | 72.97 | 29.25 | 60.35 | 80.84 | 67.35 | 44.25 | 62.90 | 91.90 |
| | IoU | 57.45 | 17.13 | 43.22 | 67.84 | 50.78 | 28.41 | 45.88 | 85.02 |
| V | Precision | 56.16 | 31.29 | 48.67 | 82.89 | 69.42 | 67.43 | 63.09 | 91.66 |
| | Recall | 77.00 | 10.61 | 28.64 | 80.31 | 53.34 | 41.64 | 62.49 | 92.90 |
| | F score | 64.95 | 15.85 | 36.06 | 81.58 | 60.33 | 51.48 | 62.79 | 92.27 |
| | IoU | 48.09 | 8.60 | 22.00 | 68.89 | 43.19 | 34.67 | 45.76 | 85.66 |
* In the proposed DSGCN-ASR, we use the spatial graph in the backbone and the spectral graph in the residual, as in group II. Maximum values in the same metrics are marked in bold.
Table 9. The evaluation metrics (%) for parametric analysis of α .
| Dataset | α | OA | Macro Precision | Macro Recall | Macro F Score | MIoU |
|---|---|---|---|---|---|---|
| HT | 0 | 84.38 | 72.00 | 66.29 | 69.03 | 55.96 |
| | 0.25 | 79.68 | 65.89 | 59.85 | 62.73 | 48.24 |
| | 0.5 | 82.13 | 71.49 | 64.13 | 67.61 | 53.16 |
| | 0.75 | 78.59 | 71.19 | 61.48 | 65.98 | 50.42 |
| | 1 | 80.51 | 71.60 | 58.87 | 64.61 | 49.66 |
| | Adaptive * | 87.57 | 74.23 | 69.45 | 71.76 | 59.51 |
| UH | 0 | 61.68 | 62.52 | 53.34 | 57.56 | 40.42 |
| | 0.25 | 66.26 | 68.35 | 59.41 | 63.57 | 46.04 |
| | 0.5 | 70.86 | 70.47 | 59.05 | 64.26 | 47.89 |
| | 0.75 | 68.69 | 71.27 | 57.68 | 63.76 | 45.56 |
| | 1 | 67.92 | 69.20 | 56.81 | 62.39 | 44.94 |
| | Adaptive * | 78.20 | 73.03 | 65.41 | 69.01 | 54.02 |
* In the proposed DSGCN-ASR, we use an adaptive α value. Maximum values in the same metrics are marked in bold.
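Table 9 shows that no fixed value of α matches the adaptive residual weight used by DSGCN-ASR. The exact adaptation mechanism is defined in the method section of the paper; purely as an illustration of the general idea, one simple way to make a residual weight trainable is to parameterize it as a learnable logit squashed to (0, 1), as in the hypothetical sketch below.

```python
import torch
import torch.nn as nn

class AdaptiveResidualWeight(nn.Module):
    """A hypothetical trainable residual weight: a learnable scalar passed
    through a sigmoid, so alpha stays in (0, 1) and is optimized jointly with
    the network instead of being fixed as in Table 9."""

    def __init__(self, init=0.0):
        super().__init__()
        self.logit = nn.Parameter(torch.tensor(float(init)))

    def forward(self):
        return torch.sigmoid(self.logit)

alpha = AdaptiveResidualWeight()
print(float(alpha()))   # starts at 0.5 for init=0.0 and is updated by backpropagation
```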
Table 10. The evaluation metrics (%) for parametric analysis of β .
| Dataset | β | OA | Macro Precision | Macro Recall | Macro F Score | MIoU |
|---|---|---|---|---|---|---|
| HT | 0 | 80.94 | 68.58 | 59.37 | 63.64 | 47.76 |
| | 0.25 | 81.36 | 70.31 | 67.17 | 68.70 | 54.27 |
| | 0.5 | 79.68 | 70.64 | 59.71 | 64.72 | 49.12 |
| | 0.75 | 14.88 | 33.62 | NaN | NaN | 10.03 |
| | 1 | 1.05 | 13.18 | NaN | NaN | 0.25 |
| | Decreasing * | 87.57 | 74.23 | 69.45 | 71.76 | 59.51 |
| UH | 0 | 61.14 | 67.58 | 52.92 | 59.36 | 40.03 |
| | 0.25 | 70.37 | 67.70 | 62.01 | 64.73 | 48.71 |
| | 0.5 | 73.06 | 67.74 | 62.25 | 64.88 | 48.77 |
| | 0.75 | 23.34 | 41.63 | NaN | NaN | 12.10 |
| | 1 | 14.54 | 12.50 | NaN | NaN | 1.82 |
| | Decreasing * | 78.20 | 73.03 | 65.41 | 69.01 | 54.02 |
* In the proposed DSGCN-ASR, we use a decreasing β . Maximum values in the same metrics are marked in bold.
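Table 10 indicates that large fixed values of β severely degrade the results, while letting β decrease performs best. The paper's actual schedule is given in the method section; as one example of a decreasing schedule, the sketch below uses the logarithmic decay popularized by GCNII, with a hypothetical function name and an assumed λ value.

```python
import math

def decreasing_beta(layer_index, lam=0.5):
    """One possible decreasing schedule for beta, in the spirit of the
    identity-mapping weight of GCNII: beta shrinks as depth grows, so deeper
    layers stay closer to an identity mapping."""
    return math.log(lam / layer_index + 1.0)

print([round(decreasing_beta(l), 3) for l in range(1, 9)])
# [0.405, 0.223, 0.154, 0.118, 0.095, 0.08, 0.069, 0.061]
```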