Special Issue "Semantic Segmentation Algorithms for 3D Point Clouds"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: 30 June 2023 | Viewed by 9186

Special Issue Editors

3D Optical Metrology (3DOM) Unit, Bruno Kessler Foundation (FBK), 38123 Trento, Italy
Interests: point cloud semantic segmentation; machine learning; deep learning; cultural heritage; 3D modelling

Special Issue Information

Dear Colleagues,

Semantic segmentation often represents a core part of the point cloud processing workflow. As such, it is currently a hot topic in fields such as remote sensing, photogrammetry, and computer vision. Researchers are attempting to build algorithms that automatically assign meaning to points, groups of points, and structured meshes. Identifying the different elements composing a 3D scene is a challenging task due to the numerous possible scenarios and data types. In this context, there is still a lack of solutions that generalise across distinct scales and scenarios, since semantic definitions differ according to the domain considered. Documented semantic segmentation approaches therefore tend to advance performance within specific contexts, such as urban, indoor, or heritage, because they rely on context-specific annotated datasets with exact and balanced classes.

So far, no semantic segmentation strategy has proven to be superior to others, and this Special Issue does not necessarily seek to identify one. Instead, it aims to collect critical research works proposing replicable algorithms and approaches on segmentation tasks. Topics may include novel breakthroughs and research, as well as assessments and comparisons of the existing machine and deep learning algorithms at various scales and application domains, such as indoor, outdoor, heritage, forestry, architectural, and urban.

This Special Issue encourages authors to submit research articles, review articles, or application-oriented reports on topics including, but not limited to, the following:

  • Machine/deep learning algorithms for point cloud semantic segmentation;
  • Instance segmentation;
  • Integration of knowledge-based rules within/after the learning process;
  • Benchmarking;
  • Problems and solutions when dealing with imbalanced classes in a training dataset;
  • Generalisation and transferability;
  • Interpreting, explaining, and visualising deep learning;
  • Best/new loss functions when training deep learning neural networks.
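On the class-imbalance topic above, a common baseline is to weight the training loss by inverse class frequency so that rare classes are not ignored. A minimal sketch (the function name and weighting scheme are illustrative, not prescribed by this Special Issue):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights inversely proportional to class frequency.

    Rare classes receive larger weights, so a loss weighted by these
    values does not collapse onto the dominant classes.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {c: total / (len(counts) * n) for c, n in counts.items()}
```

In practice, such weights would be passed to, e.g., a weighted cross-entropy loss during training.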

Dr. Eleonora Grilli
Dr. Florent Poux
Dr. Martin Weinmann
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • photogrammetric/LiDAR point cloud
  • semantic segmentation
  • feature engineering
  • instance segmentation
  • classification
  • learning-based approaches
  • knowledge-based rules
  • explainability
  • generalisation

Published Papers (8 papers)


Research

Jump to: Review, Other

Article
Tree Segmentation and Parameter Measurement from Point Clouds Using Deep and Handcrafted Features
Remote Sens. 2023, 15(4), 1086; https://doi.org/10.3390/rs15041086 - 16 Feb 2023
Viewed by 649
Abstract
Accurate measurement of the geometric parameters of trees is a vital part of forest inventory in forestry management. Aerial and terrestrial Light Detection and Ranging (LiDAR) sensors are currently used in forest inventory as an effective and efficient means of forest data collection. Many recent approaches to processing and interpreting this data make use of supervised machine learning algorithms such as Deep Neural Networks (DNNs) due to their advantages in accuracy, robustness and the ability to adapt to new data and environments. In this paper, we develop new approaches to deep-learning-based forest point cloud analysis that address key issues in real applications in forests. Firstly, we develop a point cloud segmentation framework that identifies tree stem points in individual trees and is designed to improve performance when labelled training data are limited. To improve point cloud representation learning, we propose a handcrafted point cloud feature for semantic segmentation which plays a complementary role with DNNs in semantics extraction. Our handcrafted feature can be integrated with DNNs to improve segmentation performance. Additionally, we combine this feature with a semi-supervised and cross-dataset training process to effectively leverage unlabelled point cloud data during training. Secondly, we develop a supervised machine learning framework based on Recurrent Neural Networks (RNNs) that directly estimates the geometric parameters of individual tree stems (via a stacked cylinder model) from point clouds in a data-driven process, without the need for a separate procedure for model-fitting on points. The use of a one-stage deep learning algorithm for this task makes the process easily adaptable to new environments and datasets. To evaluate our methods for both the segmentation and parameter estimation tasks, we use four real-world datasets of different tree species collected using aerial and terrestrial LiDAR. 
For the segmentation task, we extensively evaluate our method on the three different settings of supervised, semi-supervised, and cross-dataset learning, and the experimental results indicate that both our handcrafted point cloud feature and our semi-supervised and cross-dataset learning framework can significantly improve tree segmentation performance under all three settings. For the tree parameter estimation task, our DNN-based method performs comparably to well-established traditional methods and opens up new avenues for DNN-based tree parameter estimation. Full article
(This article belongs to the Special Issue Semantic Segmentation Algorithms for 3D Point Clouds)
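The paper's specific handcrafted feature is not reproduced on this page; as a generic illustration of the idea, classic eigenvalue-based shape descriptors (linearity, planarity, sphericity) can be computed from the covariance of a point's local neighbourhood and concatenated with learned features:

```python
import numpy as np

def covariance_features(neighborhood):
    """Eigenvalue-based local shape descriptors for a 3D neighbourhood.

    Illustrative only: computes linearity, planarity and sphericity from
    the sorted eigenvalues of the neighbourhood's covariance matrix.
    """
    pts = np.asarray(neighborhood, dtype=float)
    cov = np.cov(pts.T)                              # 3x3 covariance
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
    l1, l2, l3 = np.maximum(evals, 1e-12)            # clamp degeneracies
    return {
        "linearity":  (l1 - l2) / l1,
        "planarity":  (l2 - l3) / l1,
        "sphericity": l3 / l1,
    }
```

For a stem-like (linear) neighbourhood, linearity approaches 1 while sphericity approaches 0, which is why such descriptors complement learned representations in vegetation scenes.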

Article
Data Preparation Impact on Semantic Segmentation of 3D Mobile LiDAR Point Clouds Using Deep Neural Networks
Remote Sens. 2023, 15(4), 982; https://doi.org/10.3390/rs15040982 - 10 Feb 2023
Viewed by 867
Abstract
Currently, 3D point clouds are being used widely due to their reliability in presenting 3D objects and accurately localizing them. However, raw point clouds are unstructured and do not contain semantic information about the objects. Recently, dedicated deep neural networks have been proposed for the semantic segmentation of 3D point clouds. The focus has been put on the architecture of the network, while the performance of some networks, such as Kernel Point Convolution (KPConv), shows that the way data are presented at the input of the network is also important. Few prior works have studied the impact of data preparation on the performance of deep neural networks. Therefore, our goal was to address this issue. We propose two novel data preparation methods that are compatible with the density variations typical of outdoor 3D LiDAR point clouds. We also investigated two existing data preparation methods to show their impact on deep neural networks. We compared the four methods with a baseline method based on point cloud partitioning in PointNet++. We experimented with two deep neural networks: PointNet++ and KPConv. The results showed that using any of the proposed data preparation methods improved the performance of both networks by a tangible margin compared to the baseline. The two proposed novel data preparation methods achieved the best results among the investigated methods for both networks. We noticed that, for datasets containing many classes with widely varying sizes, the KNN-based data preparation offered superior performance compared to the Fixed Radius (FR) method. Moreover, this research allowed us to identify guidelines for selecting meaningful downsampling and partitioning of large-scale outdoor 3D LiDAR point clouds at the input of deep neural networks. Full article
(This article belongs to the Special Issue Semantic Segmentation Algorithms for 3D Point Clouds)
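The KNN versus Fixed Radius (FR) contrast discussed in the abstract can be sketched with a brute-force example (names are our own; real pipelines use spatial indices such as k-d trees):

```python
import math

def knn_neighbors(points, query, k):
    """k nearest neighbours: the neighbourhood size is constant
    regardless of the local point density (brute force, illustrative)."""
    return sorted(points, key=lambda p: math.dist(p, query))[:k]

def fixed_radius_neighbors(points, query, radius):
    """Fixed radius: the neighbourhood size grows and shrinks with the
    local density, which can starve sparse regions of context."""
    return [p for p in points if math.dist(p, query) <= radius]
```

This difference is one plausible reason KNN-based preparation behaves more consistently across classes of widely varying size and density.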

Article
Rethinking Design and Evaluation of 3D Point Cloud Segmentation Models
Remote Sens. 2022, 14(23), 6049; https://doi.org/10.3390/rs14236049 - 29 Nov 2022
Viewed by 536
Abstract
Currently, the use of 3D point clouds is rapidly increasing in many engineering fields, such as geoscience and manufacturing. Various studies have developed intelligent segmentation models providing accurate results, while only a few of them provide additional insights into the efficiency and robustness of their proposed models. Segmentation in the image domain has been studied to a great extent, with substantial research findings. However, segmentation analysis with point clouds is considered particularly challenging due to their unordered and irregular nature. Additionally, solving downstream tasks with 3D point clouds is computationally expensive, as point clouds normally consist of thousands or millions of points sparsely distributed in 3D space. Thus, there is a significant need for rigorous evaluation of the design characteristics of segmentation models for them to be effective and practical. Consequently, in this paper, an in-depth analysis of five fundamental and representative deep learning models for 3D point cloud segmentation is presented. Specifically, we investigate multiple experimental dimensions, such as accuracy, efficiency, and robustness in part segmentation (ShapeNet) and scene segmentation (S3DIS), to assess the effective utilization of the models. Moreover, we establish a correspondence between the models' design properties and their experimental behaviour. For example, we show that convolution-based models incorporating adaptive-weight or position-pooling local aggregation operations achieve superior accuracy and robustness to point-wise MLPs, while the latter show higher efficiency in time and memory allocation. Our findings pave the way for effective 3D point cloud segmentation model selection and inform research on point clouds and deep learning. Full article
(This article belongs to the Special Issue Semantic Segmentation Algorithms for 3D Point Clouds)

Article
PIIE-DSA-Net for 3D Semantic Segmentation of Urban Indoor and Outdoor Datasets
Remote Sens. 2022, 14(15), 3583; https://doi.org/10.3390/rs14153583 - 26 Jul 2022
Cited by 1 | Viewed by 1003
Abstract
In this paper, a 3D semantic segmentation method is proposed in which a novel feature extraction framework, named PIIE-DSA-net, combines point initial information embedding (PIIE) and dynamic self-attention (DSA). Achieving ideal segmentation accuracy is challenging due to the sparse, irregular and disordered structure of point clouds. Currently, taking into account both low-level features and deep features of the point cloud is the more reliable and widely used feature extraction strategy. Owing to the asymmetry between the lengths of the low-level and deep features, most methods cannot reliably extract and fuse the features as expected and obtain ideal segmentation results. Our PIIE-DSA-net first introduces the PIIE module to retain the low-level initial point-cloud position and (optionally) RGB information, which we combine with deep features extracted by the PAConv backbone. Secondly, we propose a DSA module that uses a learnable weight-transformation tensor to transform the combined PIIE features, followed by a self-attention structure. In this way, we obtain optimized fused low-level and deep features, which are more effective for segmentation. Experiments show that our PIIE-DSA-net ranks at least seventh among recently published state-of-the-art methods on the indoor dataset and also improves considerably on the original PAConv on outdoor datasets. Full article
(This article belongs to the Special Issue Semantic Segmentation Algorithms for 3D Point Clouds)

Article
A Prior Level Fusion Approach for the Semantic Segmentation of 3D Point Clouds Using Deep Learning
Remote Sens. 2022, 14(14), 3415; https://doi.org/10.3390/rs14143415 - 16 Jul 2022
Cited by 2 | Viewed by 1390
Abstract
Three-dimensional digital models play a pivotal role in city planning, monitoring, and sustainable management of smart and Digital Twin Cities (DTCs). In this context, semantic segmentation of airborne 3D point clouds is crucial for modeling, simulating, and understanding large-scale urban environments. Previous research studies have demonstrated that the performance of 3D semantic segmentation can be improved by fusing 3D point clouds with other data sources. In this paper, a new prior-level fusion approach is proposed for semantic segmentation of large-scale urban areas using optical images and point clouds. The proposed approach uses image classification obtained by the Maximum Likelihood Classifier as the prior knowledge for 3D semantic segmentation. Afterwards, the raster values from the classified images are assigned to the LiDAR point clouds at the data preparation step. Finally, an advanced Deep Learning model (RandLaNet) is adopted to perform the 3D semantic segmentation. The results show that the proposed approach performs well in terms of both evaluation metrics and visual examination, with a higher Intersection over Union (96%) on the created dataset, compared with 92% for the non-fusion approach. Full article
(This article belongs to the Special Issue Semantic Segmentation Algorithms for 3D Point Clouds)
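The data-preparation step described above — attaching classified-raster values to LiDAR points — can be sketched as follows (the raster layout, origin convention, and all names are our assumptions, not the paper's implementation):

```python
def attach_image_class(points, class_raster, origin, cell_size):
    """Prior-level fusion sketch: look up the classified-image cell under
    each LiDAR point's XY position and append the class label as an
    extra per-point attribute.

    class_raster is a row-major 2D grid of class labels; origin is the
    (x, y) world coordinate of cell (row 0, col 0).
    """
    ox, oy = origin
    enriched = []
    for x, y, z in points:
        col = int((x - ox) / cell_size)
        row = int((y - oy) / cell_size)
        enriched.append((x, y, z, class_raster[row][col]))
    return enriched
```

The enriched points (XYZ plus a prior class channel) would then be fed to the 3D segmentation network as an additional input feature.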

Article
Construction of a Semantic Segmentation Network for the Overhead Catenary System Point Cloud Based on Multi-Scale Feature Fusion
Remote Sens. 2022, 14(12), 2768; https://doi.org/10.3390/rs14122768 - 09 Jun 2022
Cited by 2 | Viewed by 1143
Abstract
Accurate semantic segmentation results for the overhead catenary system (OCS) are significant for OCS component extraction and geometric parameter detection. In practice, OCS scenes are complex, and the density of point cloud data obtained through Light Detection and Ranging (LiDAR) scanning is uneven due to differences in the characteristics of OCS components. Owing to these inconsistent component points, it is challenging to achieve good semantic segmentation of the OCS point cloud with existing deep learning methods. Therefore, this paper proposes a point cloud multi-scale feature fusion refinement structure neural network (PMFR-Net) for semantic segmentation of the OCS point cloud. The PMFR-Net includes a prediction module and a refinement module. The innovations of the prediction module are the double efficient channel attention module (DECA) and the serial hybrid domain attention (SHDA) structure. The point cloud refinement module (PCRM) serves as the refinement module of the network. DECA focuses on detail features; SHDA strengthens the connection of contextual semantic information; PCRM further refines the segmentation results of the prediction module. In addition, this paper created and released a new dataset of the OCS point cloud. On this dataset, the overall accuracy (OA), F1-score, and mean intersection over union (MIoU) of PMFR-Net reached 95.77%, 93.24%, and 87.62%, respectively. Compared with four state-of-the-art (SOTA) point cloud deep learning methods, PMFR-Net achieved the highest accuracy and the shortest training time. At the same time, PMFR-Net's segmentation performance on the public S3DIS dataset is better than that of the other four SOTA segmentation methods. In addition, the effectiveness of the DECA, the SHDA structure, and the PCRM was verified in an ablation experiment. The experimental results show that this network can be applied in practical applications. Full article
(This article belongs to the Special Issue Semantic Segmentation Algorithms for 3D Point Clouds)

Review

Jump to: Research, Other

Review
Three-Dimensional Point Cloud Semantic Segmentation for Cultural Heritage: A Comprehensive Review
Remote Sens. 2023, 15(3), 548; https://doi.org/10.3390/rs15030548 - 17 Jan 2023
Viewed by 887
Abstract
In the cultural heritage field, point clouds, as important raw data of geomatics, are not only three-dimensional (3D) spatial presentations of 3D objects but they also have the potential to gradually advance towards an intelligent data structure with scene understanding, autonomous cognition, and a decision-making ability. The approach of point cloud semantic segmentation as a preliminary stage can help to realize this advancement. With the demand for semantic comprehensibility of point cloud data and the widespread application of machine learning and deep learning approaches in point cloud semantic segmentation, there is a need for a comprehensive literature review covering the topics from point cloud data acquisition to semantic segmentation algorithms with application strategies in cultural heritage. This paper first reviews the current trends of acquiring point cloud data of cultural heritage from a single platform with multiple sensors and multi-platform collaborative data fusion. Then, the point cloud semantic segmentation algorithms are discussed with their advantages, disadvantages, and specific applications in the cultural heritage field. These algorithms include region growing, model fitting, unsupervised clustering, supervised machine learning, and deep learning. In addition, we summarize the public benchmark point cloud datasets related to cultural heritage. Finally, the problems and constructive development trends of 3D point cloud semantic segmentation in the cultural heritage field are presented. Full article
(This article belongs to the Special Issue Semantic Segmentation Algorithms for 3D Point Clouds)
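Of the algorithm families the review covers, region growing is the simplest to illustrate; a brute-force version that groups points with sufficiently similar normals might look as follows (thresholds and names are our own; real implementations use spatial indices):

```python
import math

def region_grow(points, normals, seed_idx, radius, angle_cos):
    """Minimal region growing on a point cloud: starting from a seed
    point, repeatedly absorb nearby points whose unit normals are
    nearly parallel to the current point's normal. Illustrative only."""
    region, frontier = {seed_idx}, [seed_idx]
    while frontier:
        i = frontier.pop()
        for j, p in enumerate(points):
            if j in region or math.dist(points[i], p) > radius:
                continue
            dot = sum(a * b for a, b in zip(normals[i], normals[j]))
            if dot >= angle_cos:        # normals nearly parallel
                region.add(j)
                frontier.append(j)
    return region
```

In heritage scenes, such geometric methods are often used to pre-segment planar elements (walls, floors) before learning-based classification of the remainder.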

Other

Jump to: Research, Review

Technical Note
SVASeg: Sparse Voxel-Based Attention for 3D LiDAR Point Cloud Semantic Segmentation
Remote Sens. 2022, 14(18), 4471; https://doi.org/10.3390/rs14184471 - 07 Sep 2022
Cited by 4 | Viewed by 1221
Abstract
3D LiDAR has become an indispensable sensor in autonomous driving vehicles. In LiDAR-based 3D point cloud semantic segmentation, most voxel-based 3D segmentors cannot efficiently capture large amounts of context information, resulting in limited receptive fields that constrain their performance. To address this problem, a sparse voxel-based attention network, termed SVASeg, is introduced for 3D LiDAR point cloud semantic segmentation; it captures large amounts of context information between voxels through sparse voxel-based multi-head attention (SMHA). Traditional multi-head attention cannot be applied directly to the non-empty sparse voxels. To this end, a hash table indexed by voxel coordinates is built to look up the non-empty neighbouring voxels of each sparse voxel. Then, the sparse voxels are grouped into different groups, each corresponding to a local region. Afterwards, position embedding, multi-head attention and feature fusion are performed for each group to capture and aggregate the context information. Based on the SMHA module, SVASeg can operate directly on the non-empty voxels, maintaining a computational overhead comparable to the convolutional method. Extensive experimental results on the SemanticKITTI and nuScenes datasets show the superiority of SVASeg. Full article
(This article belongs to the Special Issue Semantic Segmentation Algorithms for 3D Point Clouds)
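The hash-table lookup of non-empty neighbouring voxels described in the abstract can be sketched as follows (the coordinate convention and 6-connectivity are our assumptions, not necessarily the paper's):

```python
def build_voxel_hash(voxel_coords):
    """Hash table mapping each integer voxel coordinate to its index in
    the sparse voxel list, enabling O(1) neighbour lookups."""
    return {tuple(c): i for i, c in enumerate(voxel_coords)}

def nonempty_neighbors(coord, voxel_hash):
    """Indices of the 6-connected neighbouring voxels that are non-empty,
    i.e. present in the hash table."""
    x, y, z = coord
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [voxel_hash[(x + dx, y + dy, z + dz)]
            for dx, dy, dz in offsets
            if (x + dx, y + dy, z + dz) in voxel_hash]
```

Because empty voxels are simply absent from the table, attention can be restricted to populated regions without densifying the grid.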
