Special Issue "Point Cloud Processing in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 March 2020.

Special Issue Editors

Prof. Dr.-Ing. Wei Yao
Guest Editor
Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
Tel. +852 27664304
Interests: LiDAR; 3D scene perception and analysis; environmental remote sensing; sensor fusion
Prof. Francesco Pirotti
Guest Editor
Dr. Naoto Yokoya
Guest Editor
Mr. Yusheng Xu
Guest Editor
Photogrammetry and Remote Sensing, Technische Universität München, 80333 München, Germany
Tel. +49 89 289 22637
Interests: point cloud processing; photogrammetry; computer vision

Special Issue Information

Dear Colleagues,

Point clouds are deemed one of the foundational pillars of the 3D digital world, despite the irregular topology among their discrete points. Recent advances in sensor technologies that acquire point cloud data as a flexible and scalable geometric representation have paved the way for new ideas, methodologies and solutions in countless remote sensing applications. State-of-the-art sensors can capture and describe objects in a scene with dense point clouds from various platforms (satellite, aerial, UAV, vehicle-borne, backpack, handheld and static terrestrial), perspectives (nadir, oblique and side-view), spectra (multispectral) and granularities (point density and completeness). Meanwhile, the ever-expanding application areas of point cloud processing now cover not only conventional geospatial analysis but also manufacturing, civil engineering, construction, transportation, ecology, forestry, mechanical engineering and more.

This Special Issue invites contributions that focus on processing and utilizing point cloud data acquired from laser scanners and other 3D imaging systems. We are particularly interested in original papers that address innovative techniques for generating, handling and analyzing point cloud data; challenges in dealing with point cloud data in emerging remote sensing applications; and new applications for point cloud data.

Prof. Dr.-Ing. Wei Yao
Prof. Francesco Pirotti
Prof. Dr. Naoto Yokoya
Dr. Yusheng Xu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Point cloud acquisition from laser scanners, stereo vision, panoramas, camera-phone images, and oblique and satellite imagery
  • Deep learning for point cloud processing
  • Point cloud registration and segmentation
  • Feature extraction, object detection, semantic labelling, and change detection
  • Point cloud processing for indoor modelling and BIM
  • Fusion of multimodal point clouds with imagery for object classification and modelling
  • Modeling urban and natural environments from aerial and mobile LiDAR/image-based point clouds
  • Industrial applications with large-scale point clouds
  • High-performance computing for large-scale point clouds

Published Papers (12 papers)


Research


Open Access Article
Evaluating Thermal Attribute Mapping Strategies for Oblique Airborne Photogrammetric System AOS-Tx8
Remote Sens. 2020, 12(1), 112; https://doi.org/10.3390/rs12010112 - 30 Dec 2019
Abstract
Thermal imagery is widely used in various fields of remote sensing. In this study, a novel processing scheme is developed for data acquired by the oblique airborne photogrammetric system AOS-Tx8, which consists of four thermal cameras and four RGB cameras, with the goal of large-scale thermal attribute mapping. To merge 3D RGB and 3D thermal data, registration is conducted in four steps: first, thermal and RGB point clouds are generated independently by applying structure from motion (SfM) photogrammetry to both sets of imagery; next, a coarse point cloud registration is performed with the support of georeferencing data (global positioning system, GPS); subsequently, a fine point cloud registration is conducted with an octree-based iterative closest point (ICP) algorithm; finally, three different texture mapping strategies are compared. Experimental results showed that global image pose refinement outperforms the other two strategies in registration accuracy between the thermal imagery and the RGB point cloud. Potential building thermal leakages in large areas can be quickly detected in the generated texture mapping results. Furthermore, combining the proposed workflow with the oblique airborne system allows a detailed thermal analysis of building roofs and facades.
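The four-step workflow above bottoms out in a rigid alignment. As a hedged illustration (not the authors' implementation), the closed-form core that each ICP iteration solves — the best rigid transform for a fixed set of point correspondences — looks like this in NumPy:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding points. This is the
    closed-form (SVD/Kabsch) solution used inside each ICP iteration.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
pts = rng.random((100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R_est, t_est = best_rigid_transform(pts, pts @ R_true.T + t_true)
# with exact correspondences, R_est and t_est match R_true and t_true
```

A full octree-based ICP wraps this step in a loop: re-find nearest neighbours (the octree only accelerates that search), re-solve for (R, t), apply the transform, and repeat until convergence.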
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
Low Overlapping Point Cloud Registration Using Line Features Detection
Remote Sens. 2020, 12(1), 61; https://doi.org/10.3390/rs12010061 - 23 Dec 2019
Abstract
Modern robotic exploration strategies assume multi-agent cooperation, which raises the need for an effective exchange of acquired scans of the environment in the absence of a reliable global positioning system. In such situations, agents compare scans of the outside world to determine whether they overlap in some region and, if so, to determine the correct matching between them. The process of matching multiple point cloud scans is called point cloud registration. With existing point cloud registration approaches, a good match between any two point clouds is achieved only if there is a large overlap between them; however, this limits the advantage of using multiple robots, for instance for time-effective 3D mapping. Hence, a point cloud registration approach that can work with low-overlap scans is highly desirable. This work proposes a novel solution to the point cloud registration problem for scans with a very small overlapping area, assuming no initial relative positions of the point clouds. Most state-of-the-art point cloud registration approaches iteratively match keypoints in the scans, which is computationally expensive. In contrast, a more efficient line-features-based registration approach is proposed in this work. Besides reducing the computational cost, this approach avoids the high false-positive rate of existing keypoint detection algorithms, which becomes especially significant in low-overlap point cloud registration. The effectiveness of the proposed approach is demonstrated through experiments.
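As a rough sketch of why line features help: once line correspondences are available, the rotation can be recovered in closed form from the matched direction vectors alone, independent of how small the overlap is. The toy example below is an illustration of that idea (Kabsch on directions, no centroiding), not the paper's algorithm; translation would be recovered separately from points on the matched lines.

```python
import numpy as np

def rotation_from_directions(d_src, d_dst):
    """Rotation best aligning matched unit line directions d_src -> d_dst.

    d_src, d_dst: (N, 3) arrays of corresponding unit vectors. Directions
    carry no position, so no centroid subtraction is needed.
    """
    H = d_src.T @ d_dst                     # 3x3 correlation of directions
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# toy check: three non-parallel directions and a known rotation about y
dirs = np.eye(3)
angle = 0.7
R_true = np.array([[np.cos(angle), 0.0, np.sin(angle)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(angle), 0.0, np.cos(angle)]])
R_est = rotation_from_directions(dirs, dirs @ R_true.T)
```

Two non-parallel matched lines already determine the rotation, which is why such features can work where keypoint overlap is scarce.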
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
Pole-Like Street Furniture Segmentation and Classification in Mobile LiDAR Data by Integrating Multiple Shape-Descriptor Constraints
Remote Sens. 2019, 11(24), 2920; https://doi.org/10.3390/rs11242920 - 06 Dec 2019
Abstract
Nowadays, mobile laser scanning is widely used for understanding urban scenes, especially for the extraction and recognition of pole-like street furniture such as lampposts, traffic lights and traffic signs. However, state-of-the-art methods may yield low segmentation accuracy in overlapping scenes, and object classification accuracy can be strongly affected by large discrepancies in the instance counts of different objects in the same scene. To address these issues, we present a complete paradigm for pole-like street furniture segmentation and classification using mobile LiDAR (light detection and ranging) point clouds. First, we propose a 3D density-based segmentation algorithm that considers two different conditions: isolated furniture, and connected furniture in overlapping scenes. After that, a vertical region growing algorithm is employed for component splitting, and a new shape distribution estimation method is proposed to obtain more accurate global shape descriptors. For object classification, an integrated shape constraint based on the splitting result of pole-like street furniture (SplitISC) is introduced and integrated into a retrieval procedure. Two test datasets are used to verify the performance and effectiveness of the proposed method. The experimental results demonstrate that the proposed method achieves better classification results on both sites than the existing shape distribution method.
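For readers unfamiliar with global shape descriptors of the kind this classification step relies on, the classic D2 shape distribution — a normalised histogram of distances between randomly sampled point pairs — is an easy-to-code baseline. The paper's own estimation method differs in its details, so treat this as an illustrative stand-in:

```python
import numpy as np

def d2_descriptor(points, n_pairs=10000, n_bins=32, seed=0):
    """D2 shape-distribution descriptor of a point cloud:
    a normalised histogram of random pairwise distances."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, float(d.max()) + 1e-9))
    return hist / hist.sum()

# two synthetic clusters with different global shapes
rng = np.random.default_rng(1)
pole = rng.normal(scale=(0.05, 0.05, 1.0), size=(2000, 3))  # tall and thin
blob = rng.normal(scale=(0.5, 0.5, 0.5), size=(2000, 3))    # roundish
h_pole, h_blob = d2_descriptor(pole), d2_descriptor(blob)
dissimilarity = 0.5 * np.abs(h_pole - h_blob).sum()          # in [0, 1]
```

Classification by retrieval then reduces to comparing such histograms between a query segment and labelled exemplars.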
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
A Novel Method for Plane Extraction from Low-Resolution Inhomogeneous Point Clouds and its Application to a Customized Low-Cost Mobile Mapping System
Remote Sens. 2019, 11(23), 2789; https://doi.org/10.3390/rs11232789 - 26 Nov 2019
Abstract
Over the last decade, increasing demand for building interior mapping has brought the challenge of acquiring geometric information effectively and efficiently. Most mobile mapping methods rely on the integration of Simultaneous Localization And Mapping (SLAM) and costly Inertial Measurement Units (IMUs). Meanwhile, these methods also suffer from misalignment errors caused by the low-resolution, inhomogeneous point clouds captured by multi-line Mobile Laser Scanners (MLSs). While point-based alignment between such point clouds is affected by the highly dynamic motion of the platform, plane-based methods are limited by the poor quality of the extracted planes, which reduces their robustness, reliability and applicability. To alleviate these issues, we propose a method for plane extraction from low-resolution, inhomogeneous point clouds. Based on the definition of virtual scanlines and the Enhanced Line Simplification (ELS) algorithm, the method extracts feature points, generates line segments, forms patches and merges multi-direction fractions into planes. The proposed method reduces the over-segmentation caused by measurement noise and scanline curvature. A dedicated plane-to-plane point cloud alignment workflow based on the proposed plane extraction method was created to demonstrate its application. A coarse-to-fine procedure and a shortest-path initialization strategy eliminate the need for IMUs in mobile mapping. A mobile mapping prototype was designed to test the performance of the proposed methods. The results show that the proposed workflow and hardware system achieve centimeter-level accuracy, which suggests that they can be applied to mobile mapping and sensor fusion.
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
An Efficient Encoding Voxel-Based Segmentation (EVBS) Algorithm Based on Fast Adjacent Voxel Search for Point Cloud Plane Segmentation
Remote Sens. 2019, 11(23), 2727; https://doi.org/10.3390/rs11232727 - 20 Nov 2019
Abstract
Plane segmentation is a basic yet important process in light detection and ranging (LiDAR) point cloud processing. Traditional point cloud plane segmentation algorithms are typically affected by the number of points and by noise, which results in slow segmentation and poor segmentation quality. Hence, an efficient encoding voxel-based segmentation (EVBS) algorithm based on a fast adjacent-voxel search is proposed in this study. First, a binary octree algorithm is proposed to construct and encode voxels as the segmentation objects, which allows voxel features to be computed quickly and accurately. Second, a voxel-based region growing algorithm is proposed to cluster the corresponding voxels for the initial point cloud segmentation, which improves the rationality of seed selection. Finally, a point-refining method is proposed to solve the problem of under-segmentation in unlabeled voxels by judging the relationship between the points and the segmented plane. Experimental results demonstrate that the proposed algorithm outperforms the traditional algorithm in terms of computation time, extraction accuracy and recall rate.
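The "fast adjacent voxel search" idea can be illustrated with a plain hash grid: integer voxel keys make neighbour lookup constant-time, which is the query that region growing issues over and over. The paper encodes voxels with a binary octree instead, so this dict-based sketch is only a minimal analogue:

```python
import numpy as np
from collections import defaultdict

def voxelize(points, size):
    """Hash each point index into an integer voxel key (x, y, z)."""
    grid = defaultdict(list)
    keys = np.floor(points / size).astype(int)
    for idx, key in enumerate(map(tuple, keys)):
        grid[key].append(idx)
    return grid

def adjacent_points(grid, key):
    """Point indices in the 26 voxels surrounding `key` -- the
    constant-time adjacent-voxel search region growing needs."""
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if (dx, dy, dz) != (0, 0, 0):
                    found.extend(grid.get((key[0] + dx, key[1] + dy, key[2] + dz), []))
    return found

pts = np.array([[0.1, 0.1, 0.1],   # falls in voxel (0, 0, 0)
                [0.9, 0.1, 0.1],   # falls in voxel (1, 0, 0) -> adjacent
                [5.0, 5.0, 5.0]])  # far away
grid = voxelize(pts, size=0.5)
# adjacent_points(grid, (0, 0, 0)) returns [1]: only the nearby point
```

A voxel-based region grower would start from a seed voxel, test the planarity of its adjacent voxels with this lookup, and merge those that fit the seed plane.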
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
RealPoint3D: Generating 3D Point Clouds from a Single Image of Complex Scenarios
Remote Sens. 2019, 11(22), 2644; https://doi.org/10.3390/rs11222644 - 13 Nov 2019
Abstract
Generating 3D point clouds from a single image has attracted considerable attention from researchers in multimedia, remote sensing and computer vision. With the recent proliferation of deep learning, various deep models have been proposed for 3D point cloud generation. However, they require objects to be captured with absolutely clean backgrounds and fixed viewpoints, which severely limits their application in real environments. To guide 3D point cloud generation, we propose a novel network, RealPoint3D, that integrates prior 3D shape knowledge into the network. Taking additional 3D information, RealPoint3D can generate a 3D object from a single real image captured from any viewpoint and against a complex background. Specifically, given a query image, we retrieve the nearest shape model from a pre-prepared 3D model database. Then, the image, together with the retrieved shape model, is fed into RealPoint3D to generate a fine-grained 3D point cloud. We evaluated RealPoint3D on the ShapeNet and ObjectNet3D datasets. Experimental results and comparisons with state-of-the-art methods demonstrate that our framework achieves superior performance. Furthermore, the framework works well for real images with complex backgrounds (where the image contains other objects besides the one being reconstructed, and the reconstructed object may be occluded or truncated) and various viewing angles.
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
Towards Automatic Segmentation and Recognition of Multiple Precast Concrete Elements in Outdoor Laser Scan Data
Remote Sens. 2019, 11(11), 1383; https://doi.org/10.3390/rs11111383 - 10 Jun 2019
Cited by 1
Abstract
To improve construction quality and efficiency and reduce environmental pollution, the use of precast concrete elements (PCEs) has become popular in civil engineering. As PCEs are manufactured in batches and possess complicated shapes, traditional manual inspection methods cannot meet today's requirements for PCE production rates. Manual inspection of PCEs has to be conducted one by one after production, resulting in excessive storage of finished PCEs in storage yards. Therefore, many studies have proposed using terrestrial laser scanners (TLSs) for the quality inspection of PCEs. However, these studies focus on the data of a single PCE, or a single surface of a PCE, acquired from a unique or predefined scanning angle. This remains inefficient and impractical in reality, where hundreds of types of PCEs with different properties may exist. Motivated by this, this study proposes scanning multiple PCEs simultaneously with TLSs to improve inspection efficiency. In particular, a segmentation and recognition approach is proposed to automatically extract and identify the different types of PCEs in a large amount of outdoor laser scan data. For segmentation, the 3D data are first converted into 2D images; image processing is then combined with the radially bounded nearest neighbor graph (RBNN) algorithm to speed up the segmentation. For recognition, based on the as-designed models of the PCEs in building information modeling (BIM), the proposed method uses coarse and fine matching to recognize the type of each PCE. To the best of our knowledge, no prior work has addressed the automatic recognition of PCEs from millions or even tens of millions of outdoor laser scan points containing many different types of PCEs. To verify the feasibility of the proposed method, experimental studies were conducted on outdoor laser scan data of PCEs, considering their shape, type and number. In total, 22 PCEs of 12 different types are involved in this paper. The experimental results confirm the effectiveness and efficiency of the proposed approach for the automatic segmentation and recognition of different PCEs.
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
A Multi-Primitive-Based Hierarchical Optimal Approach for Semantic Labeling of ALS Point Clouds
Remote Sens. 2019, 11(10), 1243; https://doi.org/10.3390/rs11101243 - 24 May 2019
Abstract
There are normally three main steps in labeling airborne laser scanning (ALS) point clouds: the first is to use appropriate primitives to represent the scanned scenes, the second is to calculate discriminative features for each primitive, and the third is to introduce a classifier to label the point clouds. This paper investigates multiple primitives to effectively represent scenes and exploit their geometric relationships. Relationships are graded according to the properties of the related primitives. Then, based on the initial labeling results, a novel hierarchical optimization strategy is developed to refine the semantic labeling results. The proposed approach was tested on two representative sets of ALS point clouds, namely the Vaihingen datasets and a dataset of Hong Kong's Central District. The results were compared with those generated by other typical methods in previous work. Quantitative assessments showed that the proposed approach was superior to the reference methods on both experimental datasets: correctness scores attained over 98% in all cases of the Vaihingen datasets and up to 96% on the Hong Kong dataset. The results show that our approach to labeling different classes in ALS point clouds is robust and relevant to future applications such as 3D modeling and change detection from point clouds.
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
Three-Dimensional Reconstruction of Structural Surface Model of Heritage Bridges Using UAV-Based Photogrammetric Point Clouds
Remote Sens. 2019, 11(10), 1204; https://doi.org/10.3390/rs11101204 - 21 May 2019
Cited by 3
Abstract
Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. The widespread use of unmanned aerial vehicles (UAVs) now provides a practical solution for generating 3D point clouds and models, drastically reducing the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. Specifically, we propose a novel top-down method for segmenting the main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of a heritage bridge are generated from the captured UAV images through the structure-from-motion workflow. A segmentation method based on the supervoxel structure and global graph optimization is developed, which can effectively separate bridge components based on geometric features. Then, a classification tree and bridge geometry are used to recognize the different structural elements from the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments on two bridges in China demonstrate the potential of the presented reconstruction method for the 3D digital documentation of heritage bridges. Using given markers, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on the test data are better than 0.8, and a recognition accuracy better than 0.8 is achieved.
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
Non-Rigid Vehicle-Borne LiDAR-Assisted Aerotriangulation
Remote Sens. 2019, 11(10), 1188; https://doi.org/10.3390/rs11101188 - 18 May 2019
Abstract
Vehicle-borne Laser Scanning (VLS) can easily scan road surfaces at close range with high density, while an Unmanned Aerial Vehicle (UAV) can capture a wider range of ground imagery. Given the complementary characteristics of the two platforms, combining them is an effective method of data acquisition. In this paper, a non-rigid method for the aerotriangulation of UAV images assisted by a vehicle-borne light detection and ranging (LiDAR) point cloud is proposed, which greatly reduces the number of control points and improves automation. We convert LiDAR point cloud-assisted aerotriangulation into a registration problem between two point clouds, which does not require complicated feature extraction and matching between the point cloud and the images. Compared with the iterative closest point (ICP) algorithm, this method can address non-rigid image distortion with a more rigorous adjustment model and higher aerotriangulation accuracy. The experimental results show that the LiDAR point cloud constraint ensures high aerotriangulation accuracy even in the absence of control points. The root-mean-square errors (RMSEs) of the checkpoints on the x, y, and z axes are 0.118 m, 0.163 m, and 0.084 m, respectively, which verifies the reliability of the proposed method. As a necessary condition for joint mapping, research based on VLS and UAV images under uncontrolled conditions will greatly improve the efficiency of joint mapping and reduce its cost.
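The per-axis RMSE figures quoted above are computed in the standard way. A minimal sketch, with hypothetical checkpoint coordinates (the numbers below are made up for illustration, not the paper's data):

```python
import numpy as np

def checkpoint_rmse(est, ref):
    """Per-axis root-mean-square error of checkpoint coordinates:
    the (x, y, z) RMSE metric quoted in aerotriangulation reports."""
    return np.sqrt(np.mean((np.asarray(est) - np.asarray(ref)) ** 2, axis=0))

# hypothetical checkpoints: estimated vs. surveyed reference coordinates (m)
est = np.array([[10.1, 20.0, 5.05],
                [30.0, 40.2, 8.00]])
ref = np.array([[10.0, 20.0, 5.00],
                [30.0, 40.0, 8.10]])
rmse_xyz = checkpoint_rmse(est, ref)   # one RMSE per axis (x, y, z)
```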
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Open Access Article
Surfaces of Revolution (SORs) Reconstruction Using a Self-Adaptive Generatrix Line Extraction Method from Point Clouds
Remote Sens. 2019, 11(9), 1125; https://doi.org/10.3390/rs11091125 - 10 May 2019
Abstract
This paper presents an automatic reconstruction algorithm for surfaces of revolution (SORs) with a self-adaptive method for generatrix line extraction from point clouds. The proposed method does not need to calculate point cloud normals, which greatly improves the efficiency and accuracy of SOR reconstruction. First, the rotation axis of a SOR is automatically extracted as the axial direction with the minimum relative deviation among the three candidate directions, for both tall-thin and short-wide SORs. Second, the projection profile of the SOR is extracted using a triangulated irregular network (TIN) model and the random sample consensus (RANSAC) algorithm. Third, the point set of the generatrix line is determined by searching for the extremum of the Z coordinate, together with overflow-point processing, and the type of generatrix line is then determined by the smaller of the RMS errors of linear and quadratic curve fitting. To validate the efficiency and accuracy of the proposed method, two kinds of SORs are compared in the paper: simple SORs with a straight generatrix line and complex SORs with a curved generatrix line. The results demonstrate that the proposed method is robust and reconstructs SORs from point clouds with high accuracy and efficiency.
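The final step — choosing between a straight and a curved generatrix by comparing fitting RMS errors — can be sketched with NumPy's polynomial fitting. The improvement threshold below is an illustrative assumption, since the abstract does not state one:

```python
import numpy as np

def generatrix_type(z, r, curve_gain=0.5):
    """Classify a generatrix profile (radius r versus height z) as
    'line' or 'curve' by comparing RMS residuals of degree-1 and
    degree-2 polynomial fits. `curve_gain` is an illustrative
    threshold: a curve is declared only when the quadratic fit
    clearly improves on the linear one."""
    rms = {}
    for deg in (1, 2):
        coef = np.polyfit(z, r, deg)
        rms[deg] = np.sqrt(np.mean((np.polyval(coef, z) - r) ** 2))
    return 'curve' if rms[2] < curve_gain * rms[1] else 'line'

z = np.linspace(0.0, 1.0, 50)
straight = 2.0 * z + 1.0 + 0.01 * np.sin(37.0 * z)  # noisy straight generatrix
curved = (z - 0.5) ** 2                              # parabolic generatrix
```

On these synthetic profiles the rule labels `straight` as 'line' (the quadratic fit barely improves on the linear one) and `curved` as 'curve'.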
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Review


Open Access Review
A Review on Deep Learning Techniques for 3D Sensed Data Classification
Remote Sens. 2019, 11(12), 1499; https://doi.org/10.3390/rs11121499 - 25 Jun 2019
Cited by 6
Abstract
Over the past decade, deep learning has driven progress in 2D image understanding. Despite these advances, techniques for automatically understanding 3D sensed data, such as point clouds, are comparatively immature. However, with a range of important applications from indoor robotics navigation to national-scale remote sensing, there is high demand for algorithms that can learn to automatically understand and classify 3D sensed data. In this paper we review the current state-of-the-art deep learning architectures for processing unstructured Euclidean data. We begin by addressing the background concepts and traditional methodologies. We then review the current main approaches, including RGB-D, multi-view, volumetric and fully end-to-end architecture designs. Datasets for each category are documented and explained. Finally, we give a detailed discussion of the future of deep learning for 3D sensed data, using the literature to justify the areas where future research would be most valuable.
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
