Classification of Typical Static Objects in Road Scenes Based on LO-Net
Abstract
1. Introduction
- This paper proposes LO-Net, a high-accuracy network for classifying static object point clouds in road scenes. The network is composed of three modules: the GraphConv module, the Unite module, and the joint point cloud spatial pyramid pooling (J-PSPP) module. The first two modules aggregate local features and learn features across multiple layers. Inspired by the spatial pyramid concept in 2D images, the third module applies joint spatial pyramid pooling to point clouds, pooling features at multiple scales to improve the model's robustness and classification performance. A minimal sketch of how these modules compose is given after this list.
- The paper also introduces Road9, a road scene dataset collected with a mobile LiDAR system. Unlike synthetic, noise-free public datasets, Road9 contains realistic sensor noise, so its point cloud models better reflect real scenes. Extensive experiments demonstrate the effectiveness, robustness, and generalization capability of the model.
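A compact PyTorch sketch of how the pipeline described above composes. The class name `LONetSketch`, all layer widths, and the placeholder MLP internals are illustrative assumptions, not the authors' implementation; the actual modules are described in Section 3.

```python
# Hedged sketch of the LO-Net pipeline: GraphConv local aggregation,
# Unite cross-layer fusion, J-PSPP joint pooling, then a classifier head.
import torch
import torch.nn as nn

class LONetSketch(nn.Module):
    def __init__(self, num_classes: int = 9, feat_dim: int = 128):
        super().__init__()
        # Placeholder MLPs standing in for the paper's modules (Section 3).
        self.graph_conv = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU())
        self.unite = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Sequential(            # head after J-PSPP
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Dropout(0.4), nn.Linear(256, num_classes))

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:  # xyz: (B, N, 3)
        feats = self.graph_conv(xyz)              # GraphConv: local features
        feats = self.unite(feats)                 # Unite: multi-layer fusion
        pooled = torch.cat([feats.max(1).values,  # J-PSPP: joint max + average
                            feats.mean(1)], -1)   # pooling (single scale here)
        return self.classifier(pooled)            # per-object class logits

logits = LONetSketch()(torch.randn(2, 1024, 3))   # -> shape (2, 9)
```

Here `num_classes=9` follows the nine categories of the Road9 dataset (Section 4.3).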
2. Related Work
2.1. Methods Based on Multiple Views
2.2. Methods Based on Voxelization
2.3. Methods Based on Graph Convolution
2.4. Methods Based on Points
3. Methodology
3.1. Set Abstraction Module
3.2. GraphConv Module
3.3. Unite_Module
3.4. J-PSPP Module
3.5. LO-Net Overall Network Architecture
4. Experiment
4.1. Preparation for Experiment
4.2. Network Parameter Settings
4.3. Experimental Dataset
- Public datasets: To assess the feasibility and robustness of the improved network in classifying typical objects in road scenes, experiments are conducted on two internationally recognized benchmarks: the ModelNet dataset [10] and the Sydney Urban Objects dataset [42]. ModelNet includes ModelNet10 and ModelNet40; ModelNet40 comprises 9843 training models and 2468 test models, totaling 12,311 rigid 3D models. Because this dataset is noise-free, it accurately reflects the feasibility of the model improvements. Visualizations of ModelNet samples are presented in Figure 6. The Sydney Urban Objects dataset (SUO dataset) was collected with a Velodyne HDL-64E LiDAR scanning common urban road objects in the Central Business District (CBD) of Sydney, Australia. It includes 631 scans of different object categories, covering vehicles, pedestrians, signs, and trees. These point clouds are sparse, with markedly uneven point density, so the dataset tests the robustness and generalization ability of the model improvements. Visualizations of SUO samples are presented in Figure 7.
- Road9 dataset: The Road9 dataset was created by collecting point cloud data along a ring road in Shanyang District, Jiaozuo City, Henan Province, using the SSW-3 mobile LiDAR system. Figure 8 shows the study area: the remote sensing image (Figure 8a) and the original point cloud (Figure 8b). In Figure 8a, the section highlighted in red is the main research segment, a complex road scene.
4.4. Evaluation Index
- Overall accuracy (OA): the ratio of correctly classified samples to the total number of samples. A higher score indicates better overall classification performance (Equation (10)).
- Mean accuracy (MA): the per-category classification accuracies summed and divided by the number of target categories. A higher score indicates better classification across individual categories (Equation (11)).
- F1-score: the harmonic mean of precision and recall, used for a comprehensive evaluation of network performance (Equation (12)).
- Kappa coefficient: an index that evaluates the agreement between predictions and labels across the whole confusion matrix. Larger values indicate stronger classification performance (Equations (15) and (16)). Standard formulations of all four indices are given below.
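The equations themselves did not survive extraction; the following are standard formulations consistent with the definitions above. The notation ($K$ categories, $N$ total samples, $n_{ii}$ correct predictions for class $i$, $n_i$ samples of class $i$, $a_i$/$b_i$ the label/prediction counts for class $i$) is assumed for illustration, not copied from the paper.

```latex
% Standard formulations of the four evaluation indices (reconstruction;
% notation is assumed, not taken from the paper).
\begin{align}
\mathrm{OA} &= \frac{1}{N}\sum_{i=1}^{K} n_{ii} \tag{10}\\
\mathrm{MA} &= \frac{1}{K}\sum_{i=1}^{K} \frac{n_{ii}}{n_i} \tag{11}\\
F_1 &= \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}
           {\mathrm{Precision} + \mathrm{Recall}}, \quad
\mathrm{Precision} = \frac{TP}{TP + FP}, \quad
\mathrm{Recall} = \frac{TP}{TP + FN} \tag{12}\\
p_e &= \frac{1}{N^2}\sum_{i=1}^{K} a_i\, b_i, \qquad
\kappa = \frac{\mathrm{OA} - p_e}{1 - p_e} \tag{15, 16}
\end{align}
```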
4.5. Experimental Analysis of Public Dataset
4.5.1. Graph Convolution K-Value Selection Analysis
4.5.2. Analysis of Public Dataset Experiments
4.6. Experimental Analysis of Road9 Dataset
4.7. Ablation Experiment
5. Discussion
- GraphConv learns between adjacent points in the point set, aggregating edge features around each center point and thereby absorbing the geometric information of the local neighborhood. This lets the network learn richer point cloud features.
- Unite_module, integrated after hierarchical feature learning, uses upsampling to gradually restore the features of sparser layers to the preceding layer, progressively refining the semantic features of each layer so that the features learned at every level are more comprehensive.
- J-PSPP pools the final features with pyramid pooling over regions of different spatial extent; combined with joint (max plus average) pooling, this gives the network multi-scale, multi-style features that encompass both local and global characteristics. A minimal sketch of the two key operations follows this list.
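A minimal PyTorch sketch of the two key operations above: EdgeConv-style local edge-feature aggregation (the GraphConv idea) and joint multi-scale pooling (the J-PSPP idea). The shapes, the scale list, and the use of simple truncation in place of farthest point sampling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def edge_features(points: torch.Tensor, k: int = 20) -> torch.Tensor:
    """For each point, gather its k nearest neighbors and build edge
    features [x_i, x_j - x_i] around the center point. points: (B, N, C)."""
    idx = torch.cdist(points, points).topk(k, largest=False).indices  # (B, N, k)
    B, N, C = points.shape
    neighbors = torch.gather(
        points.unsqueeze(1).expand(B, N, N, C), 2,
        idx.unsqueeze(-1).expand(B, N, k, C))                         # (B, N, k, C)
    center = points.unsqueeze(2).expand_as(neighbors)
    return torch.cat([center, neighbors - center], dim=-1)           # (B, N, k, 2C)

class JointPSPP(nn.Module):
    """Pool per-point features at several scales (N/1 ... N/16 points here)
    with BOTH max and average pooling, then concatenate the results."""
    def __init__(self, scales=(1, 2, 4, 8, 16)):
        super().__init__()
        self.scales = scales

    def forward(self, feats: torch.Tensor) -> torch.Tensor:          # (B, N, C)
        B, N, C = feats.shape
        pooled = []
        for s in self.scales:
            sub = feats[:, : max(N // s, 1), :]   # stand-in for FPS subsampling
            pooled.append(sub.max(dim=1).values)  # max-pooling branch (M-PSPP)
            pooled.append(sub.mean(dim=1))        # average branch (A-PSPP)
        return torch.cat(pooled, dim=-1)          # (B, 2 * len(scales) * C)

# e.g. edge_features(torch.randn(2, 256, 3)).shape  -> (2, 256, 20, 6)
#      JointPSPP()(torch.randn(2, 1024, 128)).shape -> (2, 1280)
```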
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
A-PSPP | Point cloud spatial pyramid average pooling
FN | False negative
FP | False positive
FPS | Farthest point sampling
J-PSPP | Point cloud spatial pyramid joint pooling
MA | Mean accuracy
MLP | Multi-layer perceptron
M-PSPP | Point cloud spatial pyramid maximum pooling
OA | Overall accuracy
PSPP | Point cloud spatial pyramid pooling
SA | Set abstraction
TP | True positive
TN | True negative
References
- Hou, Y.-L.; Hao, X.; Chen, H. A Cognitively Motivated Method for Classification of Occluded Traffic Signs. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 255–262. [Google Scholar] [CrossRef]
- Xiang, M.; An, Y. A Collaborative Monitoring Method for Traffic Situations under Urban Road Emergencies. Appl. Sci. 2023, 13, 1311. [Google Scholar] [CrossRef]
- Tsai, C.-M.; Wang, B.-X. A Freeform Mirror Design of Uniform Illumination in Streetlight from a Split Light Source. IEEE Photon. J. 2018, 10, 1–12. [Google Scholar] [CrossRef]
- Orlowski, A. Smart Cities Concept—Readiness of City Halls as a Measure of Reaching a Smart City Perception. Cybern. Syst. 2021, 52, 313–327. [Google Scholar] [CrossRef]
- Zhang, L.; Guo, Y.; Qian, W.; Wang, W.; Liu, D.; Liu, S. Modelling and online training method for digital twin workshop. Int. J. Prod. Res. 2022, 61, 3943–3962. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015. [Google Scholar]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; Volume 1, pp. 77–85. [Google Scholar]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 4 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; Volume 1, pp. 5105–5114. [Google Scholar]
- Cheng, M.; Hui, L.; Xie, J.; Yang, J.; Kong, H. Cascaded Non-Local Neural Network for Point Cloud Semantic Segmentation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 8447–8452. [Google Scholar]
- Lu, T.; Wang, L.; Wu, G. CGA-Net: Category Guided Aggregation for Point Cloud Semantic Segmentation. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Online, 19–25 June 2021; pp. 11688–11697. [Google Scholar]
- Lin, Y.; Vosselman, G.; Cao, Y.; Yang, M.Y. Local and global encoder network for semantic segmentation of Airborne laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2021, 176, 151–168. [Google Scholar] [CrossRef]
- Nie, D.; Lan, R.; Wang, L.; Ren, X. Pyramid Architecture for Multi-Scale Processing in Point Cloud Segmentation. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17263–17273. [Google Scholar]
- Angrish, A.; Bharadwaj, A.; Starly, B. MVCNN++: Computer-Aided Design Model Shape Classification and Retrieval Using Multi-View Convolutional Neural Networks. J. Comput. Inf. Sci. Eng. 2020, 21, 011001. [Google Scholar] [CrossRef]
- Feng, Y.; Zhang, Z.; Zhao, X.; Ji, R.; Gao, Y. GVCNN: Group-View Convolutional Neural Networks for 3D Shape Recognition. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 264–272. [Google Scholar]
- Li, M.; Cao, Y.; Wu, H. Three-dimensional reconstruction for highly reflective diffuse object based on online measurement. Opt. Commun. 2023, 533, 129276. [Google Scholar] [CrossRef]
- Sfikas, K.; Pratikakis, I.; Theoharis, T. Ensemble of PANORAMA-based convolutional neural networks for 3D model classification and retrieval. Comput. Graph. 2018, 71, 208–218. [Google Scholar] [CrossRef]
- Graham, B.; Engelcke, M.; Van Der Maaten, L. 3D semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9224–9232. [Google Scholar] [CrossRef]
- Maturana, D.; Scherer, S. VoxNet: A 3D Convolutional Neural Network for real-time object recognition. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 922–928. [Google Scholar]
- Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015. [Google Scholar]
- Chen, H.; Dou, Q.; Yu, L.; Qin, J.; Heng, P.A. VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images. NeuroImage 2018, 170, 446–455. [Google Scholar] [CrossRef] [PubMed]
- Riegler, G.; Ulusoy, A.O.; Geiger, A. OctNet: Learning Deep 3D Representations at High Resolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6620–6629. [Google Scholar]
- Choy, C.; Gwak, J.; Savarese, S. 4D spatio-temporal ConvNets: Minkowski convolutional neural networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3070–3079. [Google Scholar] [CrossRef]
- Hua, B.-S.; Tran, M.-K.; Yeung, S.-K. Pointwise Convolutional Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 984–993. [Google Scholar]
- Li, W.; Luo, Z.; Xiao, Z.; Chen, Y.; Wang, C.; Li, J. A GCN-Based Method for Extracting Power Lines and Pylons from Airborne LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5700614. [Google Scholar] [CrossRef]
- Wang, C.; Samari, B.; Siddiqi, K. Local Spectral Graph Convolution for Point Set Feature Learning. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 52–66. [Google Scholar]
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic Graph CNN for Learning on Point Clouds. ACM Trans. Graph. 2019, 38, 1–12. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhou, Z.; David, P.; Yue, X.; Xi, Z.; Gong, B.; Foroosh, H. PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Lu, Q.; Chen, C.; Xie, W.; Luo, Y. PointNGCNN: Deep convolutional networks on 3D point clouds with neighborhood graph filters. Comput. Graph. 2020, 86, 42–51. [Google Scholar] [CrossRef]
- Liang, Z.; Yang, M.; Deng, L.; Wang, C.; Wang, B. Hierarchical depth wise graph convolutional neural network for 3D semantic segmentation of point clouds. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 8152–8158. [Google Scholar]
- Zhao, Y.; Zhou, F.; Guo, B.; Liu, B. Spatial Temporal Graph Convolution with Graph Structure Self-Learning for Early MCI Detection. In Proceedings of the 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), Cartagena, Colombia, 18–21 April 2023; pp. 1–5. [Google Scholar]
- Hao, M.; Yu, J.; Zhang, L. Spatial-Temporal Graph Convolution Network for Multichannel Speech Enhancement. In Proceedings of the ICASSP 2022—IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 23–27 May 2022; pp. 6512–6516. [Google Scholar]
- Cortinhal, T.; Tzelepis, G.; Erdal Aksoy, E. SalsaNext: Fast, uncertainty-aware semantic segmentation of LiDAR point clouds. In Advances in Visual Computing: Proceedings of the 15th International Symposium, ISVC 2020, San Diego, CA, USA, 5–7 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 207–222. [Google Scholar]
- Bai, J.; Xu, H. MSP-Net: Multi-Scale Point Cloud Classification Network. J. Comput. Aided Des. Comput. Graph. 2019, 31, 1917–1924. [Google Scholar]
- Liu, Y.; Fan, B.; Xiang, S.; Pan, C. Relation-shape Convolutional Neural Network for Point Cloud Analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8895–8904. [Google Scholar]
- Li, R.; Li, X.; Heng, P.-A.; Fu, C.-W. Pointaugment: An Auto-Augmentation Framework for Point Cloud Classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 6378–6387. [Google Scholar]
- Xue, Z.; Zhou, Y.; Du, P. S3Net: Spectral–Spatial Siamese Network for Few-Shot Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5531219. [Google Scholar] [CrossRef]
- Eldar, Y.; Lindenbaum, M.; Porat, M.; Zeevi, Y. The farthest point strategy for progressive image sampling. IEEE Trans. Image Process. 1997, 6, 1305–1315. [Google Scholar] [CrossRef] [PubMed]
- Guo, G.; Wang, H.; Bell, D.; Bi, Y.; Greer, K. KNN Model-Based Approach in Classification. In On the Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE, Proceedings of the OTM Confederated International Conferences, Catania, Sicily, Italy, 3–7 November 2003; Springer: Berlin/Heidelberg, Germany, 2003; pp. 986–996. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed]
- De Deuge, M.; Quadros, A.; Hung, C.; Douillard, B. Unsupervised Feature Learning for Classification of Outdoor 3D Scans. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA), Sydney, Australia, 2–4 December 2013. [Google Scholar]
- Cai, S.; Yu, S.; Hui, Z.; Tang, Z. ICSF: An Improved Cloth Simulation Filtering Algorithm for Airborne LiDAR Data Based on Morphological Operations. Forests 2023, 14, 1520. [Google Scholar] [CrossRef]
- Li, K.; Li, Y.; Li, J.; Ren, J.; Hao, D.; Wang, Z. Multi-stage Clustering Segmentation Algorithm for Roadside Objects Based on mobile LiDAR Point Cloud. Geogr. Geo Inf. Sci. 2023, 39, 32–38. [Google Scholar]
- Le, T.; Duan, Y. PointGrid: A Deep Network for 3D Shape Understanding. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 9204–9214. [Google Scholar]
Category | Quantity (Road9 Dataset) | Quantity (SUO Dataset)
---|---|---
0 Street Lamp | 707 | -
1 Traffic Light | 205 | 77
2 Tree | 970 | 34
3 Pole | 170 | 21
4 Traffic Sign | 241 | 51
5 Garbage Can | 52 | -
6 Bus Shelter | 57 | -
7 Guardrail | 47 |
8 Motor Vehicle | 221 | 187
Parameter | Label | Prediction |
---|---|---|
TP | + | + |
FP | − | + |
FN | + | − |
TN | − | − |
Category | PointNet | PointNet++ | LO-Net |
---|---|---|---|
Airplane | 100 | 100 | 100 |
Bathtub | 80.0 | 88.0 | 92.0 |
Bed | 96.0 | 96.0 | 97.0 |
Bench | 70.0 | 85.0 | 80.0 |
Bookshelf | 91.0 | 90.0 | 94.0 |
Bottle | 94.0 | 95.0 | 94.0 |
Bowl | 90.0 | 90.0 | 90.0 |
Car | 98.0 | 98.0 | 99.0 |
Chair | 97.0 | 92.0 | 97.0 |
Cone | 90.0 | 100 | 95.0 |
Cup | 75.0 | 80.0 | 85.0 |
Curtain | 90.0 | 90.0 | 95.0 |
Desk | 83.7 | 91.0 | 88.4 |
Door | 85.0 | 85.0 | 85.0 |
Dresser | 72.1 | 75.6 | 82.6 |
Flower Pot | 20.0 | 25.0 | 10.0 |
Glass Box | 98.0 | 96.0 | 94.0 |
Guitar | 100 | 97.0 | 100 |
Keyboard | 100 | 100 | 100 |
Lamp | 90.0 | 90.0 | 97.0 |
Laptop | 100 | 100 | 100 |
Mantel | 94.9 | 97.0 | 97.0 |
Monitor | 95.0 | 99.0 | 100 |
Night Stand | 72.1 | 73.3 | 76.7 |
Person | 95.0 | 90.0 | 90.0 |
Piano | 87.8 | 96.0 | 92.0 |
Plant | 80.0 | 74.0 | 79.0 |
Radio | 75.0 | 80.0 | 80.0 |
Range Hood | 91.0 | 95.0 | 95.0 |
Sink | 70.0 | 85.0 | 90.0 |
Sofa | 97.0 | 96.0 | 96.0 |
Stairs | 85.0 | 95.0 | 95.0 |
Stool | 85.0 | 85.0 | 85.0 |
Table | 84.0 | 72.0 | 78.0 |
Tent | 95.0 | 95.0 | 95.0 |
Toilet | 99.0 | 99.0 | 100 |
Tv Stand | 80.0 | 88.0 | 92.0 |
Vase | 74.7 | 80.0 | 79.0 |
Wardrobe | 70.0 | 80.0 | 85.0 |
Xbox | 90.0 | 75.0 | 80.0 |
Networks | ModelNet40 OA (%) | ModelNet40 MA (%) | ModelNet10 OA (%) | ModelNet10 MA (%) | Sydney Urban Objects OA (%) | Sydney Urban Objects MA (%)
---|---|---|---|---|---|---
PointNet | 88.6 | 86.0 | 91.6 | 91.2 | 67.1 | 66.2 |
PointNet++ | 89.8 | 88.0 | 92.3 | 92.3 | - | - |
PointGrid [45] | 90.1 | 87.4 | - | - | 78.3 | 77.4 |
LO-Net (Ours) | 91.2 | 88.9 | 94.2 | 94.1 | 79.5 | 77.6 |
Networks | Evaluation Index | Street Lamp | Traffic Light | Street Tree | Pole | Traffic Sign | Garbage Can | Bus Shelter | Guardrail | Motor Vehicle |
---|---|---|---|---|---|---|---|---|---|---|
PointNet | Recall | 98.1 | 85.5 | 99.3 | 100 | 91.7 | 100 | 100 | 64.3 | 95.5
PointNet | Precision | 97.7 | 92.8 | 100 | 86.7 | 95.7 | 100 | 100 | 90.0 | 98.4
PointNet | F1-score | 97.9 | 89.0 | 99.6 | 92.9 | 93.7 | 100 | 100 | 75.0 | 96.9
PointNet++ | Recall | 100 | 77.4 | 100 | 96.2 | 97.2 | 100 | 100 | 78.6 | 97.0
PointNet++ | Precision | 95.5 | 90.6 | 100 | 94.3 | 94.6 | 100 | 100 | 91.7 | 100
PointNet++ | F1-score | 97.7 | 83.5 | 100 | 95.2 | 95.9 | 100 | 100 | 84.6 | 98.5
LO-Net | Recall | 98.6 | 90.3 | 100 | 100 | 98.6 | 100 | 100 | 85.7 | 100
LO-Net | Precision | 98.1 | 96.6 | 100 | 91.2 | 98.6 | 100 | 100 | 100 | 100
LO-Net | F1-score | 98.4 | 93.3 | 100 | 95.4 | 98.6 | 100 | 100 | 92.3 | 100
Original Labels | Network | Classification Results
---|---|---
0 (Street Lamp) | PointNet | 3 (Pole)
0 (Street Lamp) | PointNet++ | 1 (Traffic light)
0 (Street Lamp) | LO-Net | 0 (Street lamp)
1 (Traffic Light) | PointNet | 0 (Street lamp)
1 (Traffic Light) | PointNet++ | 4 (Traffic sign)
1 (Traffic Light) | LO-Net | 4 (Traffic sign)
6 (Bus Shelter) | PointNet | 6 (Bus shelter)
6 (Bus Shelter) | PointNet++ | 6 (Bus shelter)
6 (Bus Shelter) | LO-Net | 6 (Bus shelter)
7 (Guardrail) | PointNet | 3 (Pole)
7 (Guardrail) | PointNet++ | 1 (Traffic light)
7 (Guardrail) | LO-Net | 7 (Guardrail)
8 (Motor Vehicle) | PointNet | 8 (Motor Vehicle)
8 (Motor Vehicle) | PointNet++ | 8 (Motor Vehicle)
8 (Motor Vehicle) | LO-Net | 8 (Motor Vehicle)
Networks | OA (%) | MA (%) | Kappa |
---|---|---|---|
PointNet | 96.3 | 92.7 | 0.952 |
PointNet++ | 97.2 | 94.0 | 0.964 |
LO-Net | 98.5 | 97.0 | 0.981 |
Max Pooling | Avg Pooling | OA (%) | MA (%)
---|---|---|---
√ | - | 98.0 | 95.6
- | √ | 97.8 | 95.4
√ | √ | 98.0 | 96.0
N/1, N/2, N/4, N/8 | N/1, N/2, N/4, N/8 | 98.4 | 96.3
N/1, N/2, N/4, N/8, N/16 | N/1, N/2, N/4, N/8, N/16 | 98.5 | 97.0
N/1, N/2, N/4, N/8, N/16 | - | 98.3 | 96.6
- | N/1, N/2, N/4, N/8, N/16 | 98.2 | 95.9
N/1, N/2, N/4, N/8, N/16, N/32 | N/1, N/2, N/4, N/8, N/16, N/32 | 98.5 | 96.7
GraphConv | Unite_Module | J-PSPP | OA (%) | MA (%)
---|---|---|---|---
√ | - | - | 97.5 | 95.1
- | √ | - | 97.4 | 94.7
- | - | √ | 97.7 | 95.8
√ | √ | - | 98.0 | 96.0
√ | - | √ | 98.3 | 96.6
- | √ | √ | 98.0 | 96.3
√ | √ | √ | 98.5 | 97.0