Point Cloud Processing with Machine Learning

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 25 November 2024 | Viewed by 3146

Special Issue Editors


Guest Editor
1. Geodesy and Geospatial Engineering, Institute of Civil and Environmental Engineering, Department of Engineering, Faculty of Science, Technology and Medicine (FSTM), University of Luxembourg, 6 Rue Richard Coudenhove-Kalergi, 1359 Kirchberg, Luxembourg
2. Institute for Advanced Studies (IAS) Luxembourg, Maison du Savoir, 2, avenue de l’Université, L-4365 Esch-sur-Alzette, Luxembourg
Interests: robust statistics; data science; outlier investigation; machine learning; deep learning; pattern recognition; statistical modeling; feature extraction; remote sensing; point cloud processing; forest monitoring; precision agriculture

Guest Editor
Institute for Creative Technologies, University of Southern California, Los Angeles, CA 90007, USA
Interests: point cloud segmentation; synthetic training data; deep learning; photogrammetry; LiDAR

Guest Editor
Computer Vision Group, School of Computation, Information and Technology, Technical University of Munich, 80333 Munich, Germany
Interests: point cloud processing; mobile mapping; 3D vision; photogrammetry and remote sensing; robotics

Guest Editor
Geodesy and Geospatial Engineering, Institute of Civil and Environmental Engineering, Department of Engineering, Faculty of Science, Technology and Medicine (FSTM), University of Luxembourg, L-1359 Luxembourg, Luxembourg
Interests: 3D city modeling; BIM; object classification; segmentation; precision agriculture; forest monitoring; machine learning; big data analysis

Special Issue Information

Dear Colleagues,

LiDAR (Light Detection and Ranging) and photogrammetry systems are recognized as two of the most suitable means of remote sensing for acquiring large-scale sets of three-dimensional (3D; x, y, z) geo-referenced points known as point clouds. On the one hand, recent advances in LiDAR technologies produce dense point clouds that can digitize objects and environments with millimeter-level accuracy. Aerial photogrammetry, on the other hand, provides a cost-effective solution for surveying a large terrain of interest with centimeter-level accuracy. In either case, point clouds are among the most promising data types used in recent years for 3D object representation, classification, segmentation, and modeling, as they provide detailed 3D geometry and spatial information about objects. One of the major challenges in automating complex and non-repetitive tasks is data processing. Processing point clouds with complex topology is not trivial, as the 3D points are usually incomplete, sparse, and unorganized, with variable point density. Additionally, outliers and noise are common, making processing even more challenging. Point clouds also tend to be huge in volume; hence, they require automatic and efficient processing for reliable, robust analysis and precise decision making.

Machine learning (ML) techniques (usually classified as supervised, unsupervised, and reinforcement learning), ranging from classical (e.g., regression and support vector machines) to advanced (e.g., ensemble learning, LightGBM, and XGBoost), have been used for point cloud processing. In recent years, a branch of the most advanced ML methods, deep learning (DL), has shown unprecedented success in object segmentation, classification, and detection in point clouds using different forms of Artificial Neural Networks (ANNs). For this Special Issue, we encourage potential authors to submit novel or improved ML and DL methods and algorithms that benefit the related communities of scientists, researchers, and industry.

This Special Issue aims to show the advantages and limitations of different ML algorithms (including deep learning) in point cloud processing (e.g., object classification, segmentation, detection, visualization, and modeling) for various fields of application, such as object modeling, visualization, feature extraction, digital twin solutions, scan-to-BIM, infrastructure (e.g., building, transportation, road-corridor) monitoring, robotics, autonomous driving, forest monitoring, the environment, and smart agriculture. Topics of interest include, but are not limited to, the following:

  • Object classification, segmentation, detection, monitoring, and change detection in road environments, transportation infrastructure (e.g., tunnels and bridges), and buildings.
  • Object detection, classification, and scene perception for autonomous vehicles and robots.
  • Estimation of forest inventory metrics, such as individual tree height, diameter at breast height, and stem and canopy modeling.
  • Feature extraction for point cloud processing in various applications of city modeling, as well as environmental and agricultural monitoring.
  • Multi-sensor (e.g., LiDAR, optical sensors, IMU) modeling and cross-modality integration.
  • Deep learning-based methods for SLAM systems.

Dr. Abdul Awal Md Nurunnabi
Dr. Meida Chen
Dr. Yan Xia
Prof. Dr. Felicia Norma Rebecca Teferle
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • classification
  • deep learning
  • feature extraction
  • machine learning
  • pattern recognition
  • point cloud
  • remote sensing
  • object detection
  • segmentation
  • visualization
  • scene understanding
  • SLAM systems

Published Papers (4 papers)


Research

16 pages, 5026 KiB  
Article
Advanced Feature Learning on Point Clouds Using Multi-Resolution Features and Learnable Pooling
by Kevin Tirta Wijaya, Dong-Hee Paek and Seung-Hyun Kong
Remote Sens. 2024, 16(11), 1835; https://doi.org/10.3390/rs16111835 - 21 May 2024
Viewed by 344
Abstract
Existing point cloud feature learning networks often learn high-semantic point features representing the global context by incorporating sampling, neighborhood grouping, neighborhood-wise feature learning, and feature aggregation. However, this process may result in a substantial loss of granular information due to the sampling operation and the widely used max pooling feature aggregation, which neglects information from non-maximum point features. Consequently, the resulting high-semantic point features could be insufficient to represent the local context, hindering the network’s ability to distinguish fine shapes. To address this problem, we propose PointStack, a novel point cloud feature learning network that utilizes multi-resolution feature learning and learnable pooling (LP). PointStack aggregates point features of various resolutions across multiple layers to capture both high-semantic and high-resolution information. The LP function calculates the weighted sum of multi-resolution point features through an attention mechanism with learnable queries, enabling the extraction of all available information. As a result, PointStack can effectively represent both global and local contexts, allowing the network to comprehend both the global structure and local shape details. PointStack outperforms various existing feature learning networks for shape classification and part segmentation on the ScanObjectNN and ShapeNetPart datasets, achieving 87.2% overall accuracy and instance mIoU.
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)
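The core of learnable pooling as described in the abstract — a weighted sum over point features, with attention weights computed from learnable queries — can be sketched in a few lines. This is a generic numpy illustration of attention pooling, not the authors' implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def learnable_pooling(features, queries):
    """Aggregate N point features (N x d) into M pooled vectors (M x d)
    using attention weights derived from M learnable query vectors (M x d)."""
    scores = queries @ features.T / np.sqrt(features.shape[1])  # (M, N) similarity
    scores -= scores.max(axis=1, keepdims=True)                 # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)               # softmax over points
    return weights @ features                                   # weighted sum, (M, d)
```

Unlike max pooling, every point feature contributes to the pooled output (in proportion to its attention weight), which is the property the abstract credits with avoiding the loss of non-maximum information.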

19 pages, 44429 KiB  
Article
GeoSparseNet: A Multi-Source Geometry-Aware CNN for Urban Scene Analysis
by Muhammad Kamran Afzal, Weiquan Liu, Yu Zang, Shuting Chen, Hafiz Muhammad Rehan Afzal, Jibril Muhammad Adam, Bai Yang, Jonathan Li and Cheng Wang
Remote Sens. 2024, 16(11), 1827; https://doi.org/10.3390/rs16111827 - 21 May 2024
Viewed by 350
Abstract
Convolutional neural networks (CNNs) that perform geometric learning on urban large-scale 3D meshes are indispensable because of the meshes’ substantial, complex, and deformed shape constitutions. To address this issue, we propose a novel Geometry-Aware Multi-Source Sparse-Attention CNN (GeoSparseNet) for the urban large-scale triangular mesh classification task. GeoSparseNet leverages the non-uniformity of 3D meshes to depict both broad flat areas and finely detailed features by adopting multi-scale convolutional kernels. Operating on mesh edges to prepare for subsequent convolutions, our method exploits the inherent geodesic connections by utilizing Large Kernel Attention (LKA)-based pooling and unpooling layers to maintain the shape topology for accurate classification predictions. By learning which edges in a mesh face to collapse, GeoSparseNet establishes a task-oriented process in which the network highlights and enhances crucial features while eliminating unnecessary ones. Compared to previous methods, our approach performs significantly better by directly processing extensive 3D mesh data, resulting in more discerning feature maps. We achieved an accuracy rate of 87.5% when testing on an urban large-scale model dataset of the Australian city of Adelaide.
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)

22 pages, 7517 KiB  
Article
Hybrid 3D Reconstruction of Indoor Scenes Integrating Object Recognition
by Mingfan Li, Minglei Li, Li Xu and Mingqiang Wei
Remote Sens. 2024, 16(4), 638; https://doi.org/10.3390/rs16040638 - 8 Feb 2024
Viewed by 812
Abstract
Indoor 3D reconstruction is particularly challenging due to complex scene structures involving object occlusion and overlap. This paper presents a hybrid indoor reconstruction method that segments the room point cloud into internal and external components, and then reconstructs the room shape and the indoor objects in different ways. We segment the room point cloud into internal and external points based on the assumption that the room shapes are composed of some large external planar structures. For the external points, we seek an appropriate combination of intersecting faces to obtain a lightweight polygonal surface model. For the internal points, we define a set of features extracted from the internal points and train a classification model based on random forests to recognize and separate indoor objects. Then, the corresponding computer-aided design (CAD) models are placed in the target positions of the indoor objects, converting the reconstruction into a model fitting problem. Finally, the indoor objects and room shapes are combined to generate a complete 3D indoor model. The effectiveness of this method is evaluated on point clouds from different indoor scenes with an average fitting error of about 0.11 m, and the performance is validated by extensive comparisons with state-of-the-art methods.
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)
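The abstract does not specify which features are extracted from the internal points for the random-forest classifier; a common choice for per-point classification in point clouds is covariance eigenvalue features (linearity, planarity, sphericity) computed over a local neighborhood. The sketch below is a hypothetical illustration of that general technique, not the paper's actual feature set.

```python
import numpy as np

def eigen_features(neighborhood):
    """Eigenvalue-based shape features for one point's local neighborhood (k x 3).
    Returns (linearity, planarity, sphericity) in [0, 1]."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
    l1, l2, l3 = evals / (evals.sum() + 1e-12)       # normalize
    linearity = (l1 - l2) / (l1 + 1e-12)             # ~1 for edges/cables
    planarity = (l2 - l3) / (l1 + 1e-12)             # ~1 for walls/floors
    sphericity = l3 / (l1 + 1e-12)                   # ~1 for volumetric clutter
    return np.array([linearity, planarity, sphericity])
```

Feature vectors like these, stacked per point, are exactly the kind of input a random forest can use to separate planar room structure from volumetric indoor objects.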

15 pages, 4238 KiB  
Article
A Deep-Learning-Based Method for Extracting an Arbitrary Number of Individual Power Lines from UAV-Mounted Laser Scanning Point Clouds
by Sha Zhu, Qiang Li, Jianwei Zhao, Chunguang Zhang, Guang Zhao, Lu Li, Zhenghua Chen and Yiping Chen
Remote Sens. 2024, 16(2), 393; https://doi.org/10.3390/rs16020393 - 19 Jan 2024
Cited by 1 | Viewed by 879
Abstract
In recent years, laser scanners integrated with Unmanned Aerial Vehicles (UAVs) have exhibited great potential in conducting power line inspections in harsh environments. The point clouds collected for power line inspections have numerous advantages over remote image data. However, point cloud-based individual power line extraction, which is a crucial technology required for power line inspections, still poses several challenges, such as massive 3D points and imbalanced category points. Moreover, in various power line scenarios, previous studies often require manual setup and careful adjustment of different thresholds to separate different power lines, which is inefficient for practical applications. To handle these challenges, in this paper, we propose a multi-branch network to automatically extract an arbitrary number of individual power lines from point clouds collected by UAV-based laser scanners. Specifically, to handle the massive 3D point clouds in complex outdoor scenarios, we propose to leverage a deep neural network for efficient and rapid feature extraction in large-scale point clouds. To mitigate imbalanced data quantities across different categories, we design a weighted cross-entropy loss function to measure the varying importance of each category. To achieve the effective extraction of an arbitrary number of power lines, we leverage a loss function to learn the discriminative features that can differentiate the points belonging to different power lines. Once the discriminative features are learned, the Mean Shift method can distinguish the individual power lines by clustering without supervision. The evaluations are executed on two datasets, which are acquired at different locations with UAV-mounted laser scanners. The proposed method has been thoroughly tested and evaluated, and the results and discussions confirm its outstanding ability to extract an arbitrary number of individual power lines in point clouds.
(This article belongs to the Special Issue Point Cloud Processing with Machine Learning)
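The weighted cross-entropy idea — scaling each point's loss term by a per-class weight so that rare categories such as power-line points are not drowned out by ground and vegetation — can be illustrated with a minimal numpy sketch. The inverse-frequency weighting shown is one common convention, an assumption here rather than the paper's exact formulation.

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    """Cross-entropy over N points, with each point's loss term scaled by
    the weight of its class (labels are integer class indices)."""
    z = logits - logits.max(axis=1, keepdims=True)            # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # softmax
    per_point = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    w = class_weights[labels]                                 # weight of each point's class
    return (w * per_point).sum() / w.sum()

def inverse_frequency_weights(labels, num_classes):
    """Rarer classes (e.g., power-line points) receive larger weights."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    return counts.sum() / (num_classes * np.maximum(counts, 1))
```

After training with such a loss plus a discriminative embedding loss, an off-the-shelf Mean Shift clustering step (e.g., scikit-learn's MeanShift) can group embeddings into individual power lines without knowing their number in advance, which matches the pipeline the abstract describes.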
