Spatial–Spectral Feature Fusion and Spectral Reconstruction of Multispectral LiDAR Point Clouds by Attention Mechanism
Abstract
1. Introduction
- DossaNet is an improved approach that couples an attention mechanism with a learnable module to adaptively adjust weighting coefficients according to spatial–spectral features. It performs spectral reconstruction and thereby provides accurate multispectral point clouds for subsequent land-cover (LC) classification.
- We propose a spatial–spectral attention (SSA) reconstruction module. By concatenating spatial and spectral features, SSA integrates the two modalities so that their complementary strengths improve spectral reconstruction (a minimal sketch of this fusion pattern follows this list).
- Our spectral reconstruction approach generalizes well and can be applied to most models. Compared with the same models without spectral reconstruction, it improves most metrics, with every tested model reaching an overall accuracy (OA) of at least 82.80%: the OA of PointNet++ increases by 4.8%, RandLA-Net by 5.93%, and DGCNN by 1%. It also yields higher classification accuracy than the IDW and KNN interpolation baselines for most models.
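As referenced in the SSA bullet above, the snippet below sketches the general concatenate-then-reweight fusion pattern in PyTorch. It is a minimal illustration, not the authors' DossaNet/SSA implementation: the module name, the sigmoid gating design, and the 64-dimensional feature width are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatialSpectralFusion(nn.Module):
    """Toy attention-style fusion of per-point spatial and spectral features."""

    def __init__(self, dim: int):
        super().__init__()
        # Learnable gate mapping the concatenated features to per-channel
        # weights in (0, 1), one weight per channel of each modality.
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, 2 * dim),
            nn.Sigmoid(),
        )
        self.proj = nn.Linear(2 * dim, dim)  # projection back to `dim`

    def forward(self, f_spatial: torch.Tensor, f_spectral: torch.Tensor):
        # f_spatial, f_spectral: (N_points, dim)
        f = torch.cat([f_spatial, f_spectral], dim=-1)  # (N, 2*dim) concat
        w = self.gate(f)                                # adaptive weights
        return self.proj(f * w)                         # fused (N, dim)

# Usage: fuse 64-D spatial and spectral features for 8192 points.
fusion = SpatialSpectralFusion(dim=64)
fused = fusion(torch.randn(8192, 64), torch.randn(8192, 64))
print(fused.shape)  # torch.Size([8192, 64])
```

The sigmoid gate plays the role of the learnable weighting coefficients: each modality's channels are amplified or suppressed before the fused projection, which is the complementary-features effect the bullet describes.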
2. Area and Dataset Partitioning
3. Methodology
3.1. Single-Channel Point Cloud Preprocessing
3.2. Dual Spectral Reconstruction Method Based on Spatial–Spectral Attention
3.2.1. SA Spectral Reconstruction Module
3.2.2. SSA Spectral Optimization Module
3.2.3. Mask L1 Loss
3.2.4. Point Cloud Reconstruction Quality Evaluation
3.3. Point Cloud Classification Verification Method
4. Results
4.1. Single-Channel Point Cloud Preprocessing Results
4.2. Dual MSL Point Cloud Reconstruction Based on SSA
4.3. Classification of LC Types
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Chen, W.; Zhao, R.; Lu, H. Response of ecological environment quality to land use transition based on dryland oasis ecological index (DOEI) in dryland: A case study of oasis concentration area in Middle Heihe River, China. Ecol. Indic. 2024, 165, 112214.
- Buchner, J.; Yin, H.; Frantz, D.; Kuemmerle, T.; Askerov, E.; Bakuradze, T.; Bleyhl, B.; Elizbarashvili, N.; Komarova, A.; Lewińska, K.E.; et al. Land-cover change in the Caucasus Mountains since 1987 based on the topographic correction of multi-temporal Landsat composites. Remote Sens. Environ. 2020, 248, 111967.
- Eva, E.A.; Marzen, L.J.; Lamba, J.; Ahsanullah, S.M.; Mitra, C. Projection of land use and land cover changes based on land change modeler and integrating both land use land cover and climate change on the hydrological response of Big Creek Lake Watershed, South Alabama. J. Environ. Manage. 2024, 370, 122923.
- Tong, X.-Y.; Xia, G.-S.; Lu, Q.; Shen, H.; Li, S.; You, S.; Zhang, L. Land-cover classification with high-resolution remote sensing images using transferable deep models. Remote Sens. Environ. 2020, 237, 111322.
- Li, W.; Sun, K.; Li, W.; Wei, J.; Miao, S.; Gao, S.; Zhou, Q. Aligning semantic distribution in fusing optical and SAR images for land use classification. ISPRS J. Photogramm. Remote Sens. 2023, 199, 272–288.
- Zhang, W.; Li, W.; Zhang, C.; Hanink, D.M.; Li, X.; Wang, W. Parcel-based urban land use classification in megacity using airborne LiDAR, high resolution orthoimagery, and Google Street View. Comput. Environ. Urban Syst. 2017, 64, 215–228.
- Jin, H.; Mountrakis, G. Fusion of optical, radar and waveform LiDAR observations for land cover classification. ISPRS J. Photogramm. Remote Sens. 2022, 187, 171–190.
- Xu, Z.; Guan, K.; Casler, N.; Peng, B.; Wang, S. A 3D convolutional neural network method for land cover classification using LiDAR and multi-temporal Landsat imagery. ISPRS J. Photogramm. Remote Sens. 2018, 144, 423–434.
- Zhang, Y.; Gao, H.; Zhou, J.; Zhang, C.; Ghamisi, P.; Xu, S.; Li, C.; Zhang, B. A cross-modal feature aggregation and enhancement network for hyperspectral and LiDAR joint classification. Expert Syst. Appl. 2024, 258, 125145.
- Yan, W.Y.; Shaker, A.; El-Ashmawy, N. Urban land cover classification using airborne LiDAR data: A review. Remote Sens. Environ. 2015, 158, 295–310.
- Chen, B.; Shi, S.; Gong, W.; Sun, J.; Guo, K.; Du, L.; Yang, J.; Xu, Q.; Song, S. A spectrally improved point cloud classification method for multispectral LiDAR. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B3-2020, 501–505.
- Wang, L.; Lu, D.; Xu, L.; Robinson, D.T.; Tan, W.; Xie, Q.; Guan, H.; Chapman, M.A.; Li, J. Individual tree species classification using low-density airborne multispectral LiDAR data via attribute-aware cross-branch transformer. Remote Sens. Environ. 2024, 315, 114456.
- Hakala, T.; Suomalainen, J.; Kaasalainen, S.; Chen, Y. Full waveform hyperspectral LiDAR for terrestrial laser scanning. Opt. Express 2012, 20, 7119–7127.
- Gong, W.; Sun, J.; Shi, S.; Yang, J.; Du, L.; Zhu, B.; Song, S. Investigating the potential of using the spatial and spectral information of multispectral LiDAR for object classification. Sensors 2015, 15, 21989–22002.
- Niu, Z.; Xu, Z.; Sun, G.; Huang, W.; Wang, L.; Feng, M.; Li, W.; He, W.; Gao, S. Design of a new multispectral waveform LiDAR instrument to monitor vegetation. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1506–1510.
- Kukkonen, M.; Maltamo, M.; Korhonen, L.; Packalen, P. Multispectral airborne LiDAR data in the prediction of boreal tree species composition. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3462–3471.
- Chen, X.; Chengming, Y.E.; Li, J.; Chapman, M.A. Quantifying the carbon storage in urban trees using multispectral ALS data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3358–3365.
- Teo, T.A.; Wu, H.M. Analysis of land cover classification using multi-wavelength LiDAR system. Appl. Sci. 2017, 7, 663.
- Wang, Q.; Gu, Y. A discriminative tensor representation model for feature extraction and classification of multispectral LiDAR data. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1568–1586.
- Pan, S.; Guan, H.; Chen, Y.; Yu, Y.; Gonçalves, W.N.; Junior, J.M.; Li, J. Land-cover classification of multispectral LiDAR data using CNN with optimized hyper-parameters. ISPRS J. Photogramm. Remote Sens. 2020, 166, 241–254.
- Zou, X.; Zhao, G.; Li, J.; Yang, Y.; Fang, Y. 3D land cover classification based on multispectral LiDAR point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1, 741–747.
- Wichmann, V.; Bremer, M.; Lindenberger, J.; Rutzinger, M.; Georges, C.; Petrini-Monteferri, F. Evaluating the potential of multispectral airborne lidar for topographic mapping and land cover classification. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, II-3/W5, 113–119.
- Weinmann, M.; Jutzi, B.; Mallet, C. Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, II-3, 181–188.
- Jing, Z.; Guan, H.; Zhao, P.; Li, D.; Yu, Y.; Zang, Y.; Wang, H.; Li, J. Multispectral LiDAR point cloud classification using SE-PointNet++. Remote Sens. 2021, 13, 2516.
- Shi, S.; Tang, X.; Chen, B.; Chen, B.; Xu, Q.; Bi, S.; Gong, W. Point cloud data processing optimization in spectral and spatial dimensions based on multispectral LiDAR for urban single-wood extraction. ISPRS Int. J. Geoinf. 2023, 12, 90.
- Dovrat, O.; Lang, I.; Avidan, S. Learning to sample. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019; pp. 2760–2769.
- Lang, I.; Manor, A.; Avidan, S. SampleNet: Differentiable point cloud sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020; pp. 7578–7588.
- Potamias, R.A.; Bouritsas, G.; Zafeiriou, S. Revisiting point cloud simplification: A learnable feature preserving approach. In Proceedings of the European Conference on Computer Vision; Springer Nature: Cham, Switzerland, 2022; pp. 586–603.
- Yang, Y.; Wang, A.; Bu, D.; Feng, Z.; Liang, J. AS-Net: An attention-aware downsampling network for point clouds oriented to classification tasks. J. Vis. Commun. Image Represent. 2022, 89, 103639.
- Ma, G.; Wei, H. A novel sketch-based framework utilizing contour cues for efficient point cloud registration. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16.
- Kong, G.; Zhang, C.; Fan, H. Large-scale 3-D building reconstruction in LoD2 from ALS point clouds. IEEE Geosci. Remote Sens. Lett. 2025, 22, 1–5.
- Lu, D.; Zhao, R.; Xu, L.; Zhou, J.; Gao, K.; Gong, Z.; Zhang, D. 3D-UMamba: 3D U-Net with state space model for semantic segmentation of multi-source LiDAR point clouds. Int. J. Appl. Earth Obs. Geoinf. 2025, 136, 104401.
- Yao, J.; Zhang, B.; Li, C.; Hong, D.; Chanussot, J. Extended Vision Transformer (ExViT) for land use and land cover classification: A multimodal deep learning framework. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–15.
- Kia, H.Z.; Choi, Y.; Nelson, D.; Park, J.; Pouyaei, A. Large eddy simulation of sneeze plumes and particles in a poorly ventilated outdoor air condition: A case study of the University of Houston main campus. Sci. Total Environ. 2023, 891, 164694.
- Prasad, S.; Le Saux, B.; Yokoya, N.; Hänsch, R. 2018 IEEE GRSS Data Fusion Challenge–Fusion of Multispectral LiDAR and Hyperspectral Data; IEEE Dataport: Boston, MA, USA, 2020.
- CloudCompare Team. CloudCompare (Version 2.13.2). 2022. [Software]. Available online: http://www.cloudcompare.org/ (accessed on 1 July 2025).
- Fernandez-Diaz, J.C.; Carter, W.E.; Glennie, C.; Shrestha, R.L.; Pan, Z.; Ekhtari, N.; Singhania, A.; Hauser, D.; Sartori, M. Capability assessment and performance metrics for the Titan multispectral mapping lidar. Remote Sens. 2016, 8, 936.
- Luo, B.; Yang, J.; Song, S.; Shi, S.; Gong, W.; Wang, A.; Du, L. Target classification of similar spatial characteristics in complex urban areas by using multispectral LiDAR. Remote Sens. 2022, 14, 238.
- Wang, Q.W.; Gu, Y.F.; Yang, M.; Wang, C. Multi-attribute smooth graph convolutional network for multispectral points classification. Sci. China Technol. Sci. 2021, 64, 2509–2522.
- Rusu, R.B. Semantic 3D object maps for everyday manipulation in human living environments. KI Künstliche Intell. 2010, 24, 345–348.
- Yang, J.; Luo, B.; Gan, R.; Wang, A.; Shi, S.; Du, L. Multiscale adjacency matrix CNN: Learning on multispectral LiDAR point cloud via multiscale local graph convolution. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 855–870.
- Reichler, M.; Taher, J.; Manninen, P.; Kaartinen, H.; Hyyppä, J.; Kukko, A. Semantic segmentation of raw multispectral laser scanning data from urban environments with deep neural networks. ISPRS Open J. Photogramm. Remote Sens. 2024, 12, 100061.
- Chakravarty, S.; Paikaray, B.; Mishra, R.; Dash, S. Hyperspectral image classification using spectral angle mapper. In Proceedings of the IEEE International Women in Engineering (WIE) Conference on Electrical and Computer Engineering (WIECON-ECE), Dhaka, Bangladesh, 4–5 December 2021; pp. 87–90.
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 5099–5108.
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. 2019, 38, 1–12.
- Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. RandLA-Net: Efficient semantic segmentation of large-scale point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020; pp. 11108–11117.
- Thomas, H.; Qi, C.R.; Deschaud, J.-E.; Marcotegui, B.; Goulette, F.; Guibas, L.J. KPConv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6410–6419.
- Xu, M.; Ding, R.; Zhao, H.; Qi, X. PAConv: Position adaptive convolution with dynamic kernel assembling on point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 3173–3182.
- Zhao, H.; Jiang, L.; Jia, J.; Torr, P.H.S.; Koltun, V. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 16259–16268.
- Chen, H.; Cheng, J.; Ruan, X.; Li, J.; Ye, L.; Chu, S.; Cheng, L.; Zhang, K. Satellite remote sensing and bathymetry co-driven deep neural network for coral reef shallow water benthic habitat classification. Int. J. Appl. Earth Obs. Geoinf. 2024, 132, 104054.
- Liu, T.; Ma, T.; Du, P.; Li, D. Semantic segmentation of large-scale point cloud scenes via dual neighborhood feature and global spatial-aware. Int. J. Appl. Earth Obs. Geoinf. 2024, 129, 103862.
- Chen, X.; Mao, J.; Zhao, B.; Tao, W.; Qin, M.; Wu, C. Building contour extraction from hypervoxel growth point cloud surface neighborhood azimuth geometric features. J. Build. Eng. 2025, 101, 111914.
- Wang, Y.; Sun, P.; Chu, W.; Li, Y.; Chen, Y.; Lin, H.; Dong, Z.; Yang, B.; He, C. Efficient multi-modal high-precision semantic segmentation from MLS point cloud without 3D annotation. Int. J. Appl. Earth Obs. Geoinf. 2024, 135, 104243.
- Huang, S.; Hu, Q.; Ai, M.; Zhao, P.; Li, J.; Cui, H.; Wang, S. Weakly supervised 3D point cloud semantic segmentation for architectural heritage using teacher-guided consistency and contrast learning. Autom. Construct. 2024, 168, 105831.
| LC Types | Impervious Ground | Grass | Tree | Building | Car | Power Line | Bare Land |
|---|---|---|---|---|---|---|---|
| Training (%) | 28.9 | 26.2 | 23.0 | 18.4 | 1.7 | 1.0 | 0.8 |
| Testing (%) | 29.1 | 25.7 | 22.5 | 19.1 | 1.9 | 1.0 | 0.7 |
| Difference (%) | 0.2 | 0.5 | 0.5 | 0.7 | 0.2 | 0.0 | 0.1 |
| Model | Down-Sampling Method | Points per Layer / Voxel Size | Neighborhood Search Method | Neighborhood Points |
|---|---|---|---|---|
| PointNet++ | FPS | 8192, 2048, 512, 128, 32 | SNS + MSG | 16 + 32 |
| DGCNN | × | × | KNN | 20 |
| RandLA-Net | RS | 8192, 2048, 512, 128, 32 | KNN | 16 |
| KPConv | VS | 0.4, 0.8, 1.6, 3.2, 6.4 | SNS | 16 |
| PAConv | FPS | 8192, 2048, 512, 128, 32 | KNN | 1, 2, 4, 8, 16 |
| Point Transformer | FPS | 8192, 2048, 512, 128, 32 | KNN | 8 + 16 |
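The down-sampling and neighborhood-search entries above (FPS, RS, VS; KNN, SNS, MSG) are standard point cloud operations. The brute-force NumPy sketch below illustrates two of them, farthest point sampling and KNN search; the listed models use optimized GPU implementations instead, and the 8192 → 512 point counts and k = 16 merely echo the table.

```python
import numpy as np

def farthest_point_sampling(xyz: np.ndarray, m: int) -> np.ndarray:
    """Greedily pick m point indices that spread out over the cloud."""
    n = xyz.shape[0]
    chosen = np.zeros(m, dtype=np.int64)   # chosen[0] = 0: arbitrary seed
    dist = np.full(n, np.inf)              # distance to nearest chosen point
    for i in range(1, m):
        d = np.linalg.norm(xyz - xyz[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        chosen[i] = int(np.argmax(dist))   # farthest from all chosen so far
    return chosen

def knn_indices(xyz: np.ndarray, queries: np.ndarray, k: int) -> np.ndarray:
    """Brute-force k-nearest-neighbor indices for each query point."""
    d = np.linalg.norm(queries[:, None, :] - xyz[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, :k]

pts = np.random.rand(8192, 3).astype(np.float32)
centers = pts[farthest_point_sampling(pts, 512)]  # one 8192 -> 512 layer
neigh = knn_indices(pts, centers, k=16)           # 16 neighbors per center
print(centers.shape, neigh.shape)                 # (512, 3) (512, 16)
```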
| Hyperparameter | Value |
|---|---|
| Epochs | 100 |
| Batch size | 8 |
| Optimizer | AdamW |
| Learning rate | Initial rate 1 × 10⁻³, decayed to 1 × 10⁻⁵ over the first 80 epochs with cosine annealing, then held at 1 × 10⁻⁵ for the final 20 epochs (see the schedule sketch after this table) |
| Loss function | Cross-entropy and Mask L1 |
| Dropout rate | 0.5 |
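One way to realize the learning-rate schedule above is sketched below in PyTorch. The stand-in model and the per-epoch scheduler stepping are assumptions; only the rates, epoch counts, and optimizer come from the table.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(8, 7)                   # hypothetical stand-in network
optimizer = AdamW(model.parameters(), lr=1e-3)  # initial rate 1e-3
scheduler = CosineAnnealingLR(optimizer, T_max=80, eta_min=1e-5)

for epoch in range(100):
    # ... one full training epoch (forward/backward/optimizer.step()) ...
    if epoch < 80:
        scheduler.step()   # epochs 0-79: cosine decay from 1e-3 to 1e-5
    # epochs 80-99: no scheduler step, rate stays at 1e-5
```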
Structure | OA | AA | Kappa | MIoU | Params (M) | FLOPs (G) |
---|---|---|---|---|---|---|
None | 0.9430 | 0.8999 | 0.9256 | 0.8618 | / | / |
SA | 0.9493 | 0.9196 | 0.9339 | 0.8840 | 0.006 | 0.029 |
SA + 1 × SSA | 0.9462 | 0.9224 | 0.9299 | 0.8841 | 0.018 | 0.064 |
SA + 2 × SSA | 0.9481 | 0.9184 | 0.9323 | 0.8833 | 0.030 | 0.098 |
SA + 3 × SSA | 0.9503 | 0.9229 | 0.9352 | 0.8892 | 0.042 | 0.132 |
SA + 4 × SSA | 0.9493 | 0.9219 | 0.9339 | 0.8864 | 0.054 | 0.166 |
| Metric | Original Data | SA | SSA |
|---|---|---|---|
| Average SAM (°) | 0 | 58.41 | 57.10 |
| Zero-value transformation rate (%) | 66.67 | 64.90 | / |
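The average SAM above is the spectral angle mapper: the angle between each reconstructed spectrum and its reference, averaged over points. A NumPy sketch of the standard definition follows; the function name and the 3-channel toy data are illustrative assumptions.

```python
import numpy as np

def spectral_angle_deg(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Spectral angle (degrees) between paired spectra, row-wise.

    a, b: (N, C) arrays of per-point spectra with C channels each.
    """
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    cos = np.clip(num / np.maximum(den, 1e-12), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

ref = np.random.rand(1000, 3)                  # e.g., 3-channel Titan spectra
rec = ref + 0.05 * np.random.randn(1000, 3)    # perturbed reconstruction
print(spectral_angle_deg(ref, rec).mean())     # average SAM in degrees
```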
| Model | Spectral Reconstruction | OA | AA | Kappa | MIoU |
|---|---|---|---|---|---|
| PointNet++ | None | 0.7800 | 0.4790 | 0.7100 | 0.3930 |
| | IDW | 0.8147 | 0.5005 | 0.7558 | 0.4220 |
| | KNN | 0.8107 | 0.4879 | 0.7484 | 0.4130 |
| | Ours | 0.8280 | 0.5237 | 0.7732 | 0.4515 |
| DGCNN | None | 0.9332 | 0.8685 | 0.9129 | 0.8242 |
| | IDW | 0.9347 | 0.8512 | 0.9146 | 0.8081 |
| | KNN | 0.9405 | 0.8427 | 0.9206 | 0.8030 |
| | Ours | 0.9433 | 0.8720 | 0.9260 | 0.8351 |
| RandLA-Net | None | 0.8548 | 0.7371 | 0.8097 | 0.6696 |
| | IDW | 0.9143 | 0.7807 | 0.8877 | 0.7302 |
| | KNN | 0.9082 | 0.7702 | 0.8807 | 0.7165 |
| | Ours | 0.9141 | 0.7798 | 0.8874 | 0.7291 |
| KPConv | None | 0.9430 | 0.8999 | 0.9256 | 0.8618 |
| | IDW | 0.9505 | 0.9150 | 0.9354 | 0.8804 |
| | KNN | 0.9496 | 0.9086 | 0.9343 | 0.8821 |
| | Ours | 0.9503 | 0.9229 | 0.9352 | 0.8892 |
| PAConv | None | 0.9473 | 0.8975 | 0.9312 | 0.8615 |
| | IDW | 0.9527 | 0.9172 | 0.9382 | 0.8854 |
| | KNN | 0.9524 | 0.9143 | 0.9377 | 0.8824 |
| | Ours | 0.9529 | 0.9327 | 0.9384 | 0.8956 |
| Point Transformer | None | 0.9501 | 0.8913 | 0.9348 | 0.8564 |
| | IDW | 0.9536 | 0.9109 | 0.9394 | 0.8783 |
| | KNN | 0.9527 | 0.8985 | 0.9382 | 0.8660 |
| | Ours | 0.9533 | 0.9169 | 0.9390 | 0.8829 |
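The IDW and KNN rows above are interpolation baselines that fill a point's missing spectral channel from nearby points measured in that channel. A brute-force sketch of the IDW variant follows; the KNN variant corresponds to uniform weights over the k neighbors. The function name and the k = 6 and power = 2 defaults are assumptions, not the paper's exact settings.

```python
import numpy as np

def idw_fill_channel(xyz, values, has_value, k=6, power=2.0, eps=1e-8):
    """Fill missing per-point channel values by inverse-distance weighting.

    xyz:       (N, 3) point coordinates
    values:    (N,)   channel intensities (undefined where has_value is False)
    has_value: (N,)   bool mask of points measured in this channel
    """
    src_xyz, src_val = xyz[has_value], values[has_value]
    out = values.copy()
    for i in np.flatnonzero(~has_value):
        d = np.linalg.norm(src_xyz - xyz[i], axis=1)
        nn = np.argsort(d)[:k]                  # k nearest measured points
        w = 1.0 / (d[nn] ** power + eps)        # inverse-distance weights
        out[i] = np.sum(w * src_val[nn]) / np.sum(w)
    return out

xyz = np.random.rand(100, 3)
vals = np.random.rand(100)
mask = np.random.rand(100) > 0.3                # ~70% of points measured
filled = idw_fill_channel(xyz, vals, mask)
```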
Per-class IoU by LC type:

| Model | Spectral Reconstruction | Impervious Ground | Grass | Building | Tree | Car | Power Line | Bare Ground |
|---|---|---|---|---|---|---|---|---|
| PointNet++ | None | 0.6175 | 0.6687 | 0.5427 | 0.8394 | 0.0824 | 0 | 0 |
| | IDW | 0.6596 | 0.7434 | 0.6030 | 0.8658 | 0.0820 | 0 | 0 |
| | KNN | 0.6621 | 0.7314 | 0.5648 | 0.8631 | 0.0699 | 0 | 0 |
| | Ours | 0.6575 | 0.7761 | 0.6087 | 0.9145 | 0.2041 | 0 | 0 |
| DGCNN | None | 0.8319 | 0.8358 | 0.9398 | 0.9494 | 0.8594 | 0.6863 | 0.6669 |
| | IDW | 0.8434 | 0.8409 | 0.9396 | 0.9488 | 0.8614 | 0.6514 | 0.5712 |
| | KNN | 0.8556 | 0.8578 | 0.9473 | 0.9516 | 0.8669 | 0.6044 | 0.5376 |
| | Ours | 0.8566 | 0.8589 | 0.9508 | 0.9578 | 0.8769 | 0.6608 | 0.6841 |
| RandLA-Net | None | 0.6386 | 0.6526 | 0.9205 | 0.9480 | 0.7839 | 0.7434 | 0 |
| | IDW | 0.7786 | 0.8135 | 0.9387 | 0.9550 | 0.8526 | 0.7729 | 0 |
| | KNN | 0.7723 | 0.7895 | 0.9364 | 0.9548 | 0.8059 | 0.7566 | 0 |
| | Ours | 0.7926 | 0.8022 | 0.9341 | 0.9553 | 0.8262 | 0.7935 | 0 |
| KPConv | None | 0.8481 | 0.8511 | 0.9504 | 0.9734 | 0.9289 | 0.8816 | 0.5993 |
| | IDW | 0.8719 | 0.8646 | 0.9568 | 0.9740 | 0.9303 | 0.8810 | 0.6843 |
| | KNN | 0.8738 | 0.8691 | 0.9607 | 0.9708 | 0.9197 | 0.8651 | 0.7157 |
| | Ours | 0.8692 | 0.8606 | 0.9587 | 0.9735 | 0.9321 | 0.8787 | 0.7518 |
| PAConv | None | 0.8630 | 0.8648 | 0.9564 | 0.9679 | 0.9110 | 0.8414 | 0.6263 |
| | IDW | 0.8733 | 0.8728 | 0.9649 | 0.9704 | 0.9228 | 0.8545 | 0.7391 |
| | KNN | 0.8828 | 0.8748 | 0.9657 | 0.9706 | 0.9225 | 0.8565 | 0.7030 |
| | Ours | 0.8728 | 0.8682 | 0.9642 | 0.9730 | 0.9224 | 0.8714 | 0.7969 |
| Point Transformer | None | 0.8765 | 0.8667 | 0.9598 | 0.9717 | 0.9059 | 0.8145 | 0.5995 |
| | IDW | 0.8814 | 0.8738 | 0.9627 | 0.9705 | 0.9112 | 0.8071 | 0.7415 |
| | KNN | 0.8818 | 0.8718 | 0.9630 | 0.9688 | 0.8977 | 0.8066 | 0.6722 |
| | Ours | 0.8797 | 0.8746 | 0.9596 | 0.9700 | 0.9010 | 0.8321 | 0.7636 |
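The OA, AA, Kappa, and MIoU columns throughout these tables follow their standard confusion-matrix definitions, summarized in the sketch below (per-class IoU is the diagonal entry over the union of the corresponding row and column).

```python
import numpy as np

def classification_metrics(conf: np.ndarray):
    """OA, AA, Cohen's kappa, and mIoU from a confusion matrix.

    conf[i, j] = number of points of true class i predicted as class j.
    """
    n = conf.sum()
    tp = np.diag(conf)
    oa = tp.sum() / n                                        # overall accuracy
    aa = np.mean(tp / np.maximum(conf.sum(axis=1), 1))       # mean per-class recall
    pe = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp         # TP + FP + FN
    miou = np.mean(tp / np.maximum(union, 1))
    return oa, aa, kappa, miou

conf = np.array([[50, 2, 0], [3, 45, 1], [0, 2, 40]])  # toy 3-class example
print(classification_metrics(conf))
```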
| Neighborhood Scale K | OA | AA | Kappa | MIoU |
|---|---|---|---|---|
| 3 | 0.9493 | 0.9141 | 0.9283 | 0.8783 |
| 6 | 0.9503 | 0.9229 | 0.9352 | 0.8892 |
| 12 | 0.9501 | 0.9221 | 0.9349 | 0.8874 |
| 18 | 0.9494 | 0.9193 | 0.9340 | 0.8857 |
| 32 | 0.9488 | 0.9155 | 0.9321 | 0.8795 |

Per-class IoU by LC type:

| Neighborhood Scale K | Impervious Ground | Grass | Building | Tree | Car | Power Line | Bare Ground |
|---|---|---|---|---|---|---|---|
| 3 | 0.8671 | 0.8600 | 0.9594 | 0.9709 | 0.9248 | 0.8778 | 0.6983 |
| 6 | 0.8692 | 0.8606 | 0.9587 | 0.9735 | 0.9321 | 0.8787 | 0.7518 |
| 12 | 0.8665 | 0.8621 | 0.9598 | 0.9735 | 0.9309 | 0.8764 | 0.7425 |
| 18 | 0.8673 | 0.8592 | 0.9575 | 0.9731 | 0.9302 | 0.8786 | 0.7339 |
| 32 | 0.8651 | 0.8583 | 0.9586 | 0.9709 | 0.9238 | 0.8623 | 0.7179 |
| Loss Function | OA | AA | Kappa | MIoU |
|---|---|---|---|---|
| None | 0.9493 | 0.9187 | 0.9339 | 0.8825 |
| Mask L1 Loss | 0.9503 | 0.9229 | 0.9352 | 0.8892 |
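A minimal sketch consistent with the name Mask L1 Loss: an L1 reconstruction error averaged only over entries flagged valid by a binary mask, so that unmeasured spectral channels do not contribute to the loss. The paper's exact masking rule is not reproduced here; the function below is an illustrative assumption.

```python
import torch

def masked_l1_loss(pred, target, mask):
    """L1 loss averaged over entries where mask == 1.

    pred, target: (N, C) reconstructed / reference spectra
    mask:         (N, C) binary mask of entries that contribute to the loss
    """
    diff = (pred - target).abs() * mask
    return diff.sum() / mask.sum().clamp(min=1)

# Usage: only penalize channels that were actually measured.
pred = torch.rand(4, 3)
target = torch.rand(4, 3)
mask = torch.tensor([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]],
                    dtype=torch.float32)
print(masked_l1_loss(pred, target, mask))
```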