Search Results (160)

Search Parameters:
Keywords = indoor point cloud data

25 pages, 5526 KiB  
Article
Implementation of Integrated Smart Construction Monitoring System Based on Point Cloud Data and IoT Technique
by Ju-Yong Kim, Suhyun Kang, Jungmin Cho, Seungjin Jeong, Sanghee Kim, Youngje Sung, Byoungkil Lee and Gwang-Hee Kim
Sensors 2025, 25(13), 3997; https://doi.org/10.3390/s25133997 - 26 Jun 2025
Viewed by 533
Abstract
This study presents an integrated smart construction monitoring system that combines point cloud data (PCD) from a 3D laser scanner with real-time IoT sensors and ultra-wideband (UWB) indoor positioning technology to enhance construction site safety and quality management. The system addresses the limitations of traditional BIM-based methods by leveraging high-precision PCD that accurately reflects actual site conditions. Field validation was conducted over 17 days at a residential construction site, focusing on two floors during concrete pouring. The concrete strength prediction model, based on the ASTM C1074 maturity method, achieved prediction accuracy within 1–2 MPa of measured values (e.g., predicted: 26.2 MPa vs. actual: 25.3 MPa at 14 days). The UWB-based worker localization system demonstrated a maximum positioning error of 1.44 m with 1 s update intervals, enabling real-time tracking of worker movements. Static accuracy tests showed localization errors of 0.80–0.94 m under clear line-of-sight and 1.14–1.26 m under partial non-line-of-sight. The integrated platform successfully combined PCD visualization with real-time sensor data, allowing construction managers to monitor concrete curing progress and worker safety simultaneously. Full article
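
The strength prediction relies on the ASTM C1074 maturity method. Below is a minimal sketch of the underlying Nurse-Saul maturity computation; the datum temperature, the logarithmic curve, and its coefficients are illustrative placeholders that would have to be fitted to lab-cured specimens, not values taken from the paper.

```python
import numpy as np

def nurse_saul_maturity(temps_c, dt_hours=1.0, datum_c=0.0):
    """Nurse-Saul maturity index (degC * hours), the basis of ASTM C1074."""
    temps = np.asarray(temps_c, dtype=float)
    return float(np.sum(np.maximum(temps - datum_c, 0.0) * dt_hours))

def predict_strength(maturity, a=-20.0, b=8.0):
    """Hypothetical maturity-strength curve S = a + b*ln(M); a and b must be
    calibrated against strength tests of lab-cured specimens."""
    return a + b * np.log(maturity)

temps = [18.0, 19.5, 21.0] * 8                    # hypothetical hourly readings
m = nurse_saul_maturity(temps)
print(f"maturity = {m:.0f} degC-h -> strength = {predict_strength(m):.1f} MPa")
```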

21 pages, 4424 KiB  
Article
Non-Contact Fall Detection System Using 4D Imaging Radar for Elderly Safety Based on a CNN Model
by Sejong Ahn, Museong Choi, Jongjin Lee, Jinseok Kim and Sungtaek Chung
Sensors 2025, 25(11), 3452; https://doi.org/10.3390/s25113452 - 30 May 2025
Viewed by 764
Abstract
Progressive global aging has increased the number of elderly individuals living alone. The consequent rise in fall accidents has worsened physical injuries, reduced quality of life, and increased medical expenses. Existing wearable fall-detection devices may cause discomfort, and camera-based systems raise privacy concerns. Here, we propose a non-contact fall-detection system that integrates 4D imaging radar sensors with artificial intelligence (AI): falls are detected through real-time monitoring, visualized on a web-based dashboard and through a Unity engine-based avatar, and reported with immediate alerts. The system eliminates the need for uncomfortable wearable devices and mitigates the privacy issues associated with cameras. The radar sensors generate point cloud data (spatial coordinates, velocity, Doppler power, and time), which allow analysis of body position and movement. A CNN model classifies postures into standing, sitting, and lying, while changes in speed and position distinguish falling actions from lying-down actions. The point cloud data were normalized and organized using zero padding and k-means clustering to improve learning efficiency. The model achieved 98.66% accuracy in posture classification and 95% in fall detection. This study demonstrates the effectiveness of the proposed fall-detection approach and suggests future directions in multi-sensor integration for indoor applications. Full article
(This article belongs to the Special Issue Advanced Sensors for Health Monitoring in Older Adults)
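
A sketch of the zero-padding and k-means normalization step described above; the fixed frame size of 64 points and the five channels (x, y, z, velocity, power) are assumptions for illustration, not figures from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def normalize_frame(points, target_n=64, seed=0):
    """Force a radar frame (N x 5) to a fixed size for CNN input:
    zero-pad sparse frames, k-means-cluster dense ones to centroids."""
    points = np.asarray(points, dtype=float)
    if len(points) == 0:
        return np.zeros((target_n, 5))
    if len(points) <= target_n:
        pad = np.zeros((target_n - len(points), points.shape[1]))
        return np.vstack([points, pad])
    km = KMeans(n_clusters=target_n, n_init=10, random_state=seed).fit(points)
    return km.cluster_centers_

frame = np.random.rand(150, 5)        # hypothetical dense radar frame
print(normalize_frame(frame).shape)   # (64, 5)
```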

19 pages, 18677 KiB  
Article
Generation of Structural Components for Indoor Spaces from Point Clouds
by Junhyuk Lee, Yutaka Ohtake, Takashi Nakano and Daisuke Sato
Sensors 2025, 25(10), 3012; https://doi.org/10.3390/s25103012 - 10 May 2025
Viewed by 438
Abstract
Point clouds from laser scanners have been widely used in recent research on indoor modeling methods. Current data-driven modeling methods, in particular, require preprocessing that divides structural from nonstructural components before modeling. In this paper, we propose an indoor modeling method that does not require this classification. A pre-mesh is generated to construct the adjacency relations of the point cloud, and plane components are extracted using planar-based region growing. The distance field of each plane is then calculated, yielding voxel data referred to as a surface confidence map. Subsequently, the inside and outside of the indoor model are separated using a graph-cut algorithm. Finally, indoor models with watertight meshes are generated via dual contouring and mesh refinement. The experimental results showed that the point-to-mesh error ranged from approximately 2 mm to 50 mm depending on the dataset. Furthermore, completeness (the proportion of the original point cloud successfully reconstructed into the mesh) approached 1.0 for single-room datasets and reached around 0.95 for certain multiroom and synthetic datasets. These results demonstrate the effectiveness of the proposed method in automatically removing non-structural components and generating clean structural meshes. Full article
(This article belongs to the Section Sensing and Imaging)
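
The planar-based region growing stage can be sketched as below; this simplification uses k-NN adjacency in place of the paper's pre-mesh and assumes unit-length normals, so it is an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow_planes(points, normals, angle_deg=5.0, dist=0.02, k=16):
    """Greedy planar region growing: a point joins the seed's region if its
    (unit) normal agrees with the seed normal and it lies near the seed plane."""
    tree = cKDTree(points)
    cos_t = np.cos(np.radians(angle_deg))
    labels = -np.ones(len(points), dtype=int)
    region = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack, labels[seed] = [seed], region
        n0, p0 = normals[seed], points[seed]
        while stack:
            i = stack.pop()
            for j in tree.query(points[i], k=k)[1]:
                if labels[j] == -1 \
                        and abs(np.dot(normals[j], n0)) > cos_t \
                        and abs(np.dot(points[j] - p0, n0)) < dist:
                    labels[j] = region
                    stack.append(j)
        region += 1
    return labels   # per-point plane id
```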

20 pages, 9870 KiB  
Article
Analysis, Simulation, and Scanning Geometry Calibration of Palmer Scanning Units for Airborne Hyperspectral Light Detection and Ranging
by Shuo Shi, Qian Xu, Chengyu Gong, Wei Gong, Xingtao Tang and Bowei Zhou
Remote Sens. 2025, 17(8), 1450; https://doi.org/10.3390/rs17081450 - 18 Apr 2025
Viewed by 388
Abstract
Airborne hyperspectral LiDAR (AHSL) is a technology that integrates the spectral content collected by hyperspectral imaging with the precise 3D descriptions of observed objects obtained by LiDAR (light detection and ranging), detecting the spectral and three-dimensional (3D) information of an object using laser measurements alone. Nevertheless, this spectral richness introduces a new issue into the scan unit: a mechanical–optical trade-off. The abundant spectral information requires a large optical aperture, which limits the mechanical load the scan unit can bear at demanding rotation speeds and flight heights. Simulation and analysis of candidate scan models show that the Palmer scan best accommodates the large optical aperture required by AHSL. Based on the Palmer scan simulation, a ratio of overlap (ROP) of 45.23% is identified as optimal for minimizing variation in point density, reducing the coefficient of variation (CV) from 0.47 to 0.19. A second issue is that the complex optical path makes it difficult to calibrate the scanning geometry with external devices. To tackle this, a self-calibration strategy is proposed that integrates indoor laser vector retrieval with airborne orientation correction and comprises three improvements: (1) a self-determined laser vector retrieval strategy that exploits the self-ranging capability of AHSL to recover the initial scanning laser vectors with a precision of 0.874 mrad; (2) a linear residual estimated interpolation (LREI) method that improves interpolation precision, reducing the RMSE from 1.517 mrad to 0.977 mrad while preserving the geometric features of Palmer scanning traces better than linear interpolation; and (3) a least-deviated flatness restricted optimization (LDFO) algorithm that calibrates the angle offset in airborne scanning point clouds, reducing the standard deviation of scanning-plane flatness from 1.389 m to 0.241 m and mitigating distortion of the scanning strip. This study provides a practical scanning method and a corresponding calibration strategy for AHSL. Full article
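
For intuition, an idealized Palmer (conical) scan ground trace from a moving platform can be simulated as follows; every parameter is hypothetical, and the trace is approximated as circular rather than the slightly elliptical pattern of a real Palmer scanner.

```python
import numpy as np

def palmer_trace(h=1000.0, theta_deg=15.0, scan_hz=20.0, v=50.0,
                 prf=10000.0, duration=2.0):
    """Ground trace of an idealized conical (Palmer) scan: a circle of radius
    h*tan(theta) swept along track by the platform velocity."""
    t = np.arange(0.0, duration, 1.0 / prf)
    r = h * np.tan(np.radians(theta_deg))
    phi = 2.0 * np.pi * scan_hz * t
    x = r * np.sin(phi)          # across-track
    y = v * t + r * np.cos(phi)  # along-track, advancing with the platform
    return x, y

x, y = palmer_trace()
# Rough along-track ratio of overlap between successive circles:
r = 1000.0 * np.tan(np.radians(15.0))
print(1.0 - (50.0 / 20.0) / (2.0 * r))
```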

22 pages, 11693 KiB  
Article
Development of Navigation Network Models for Indoor Path Planning Using 3D Semantic Point Clouds
by Jiwei Hou, Patrick Hübner and Dorota Iwaszczuk
Appl. Sci. 2025, 15(3), 1151; https://doi.org/10.3390/app15031151 - 23 Jan 2025
Cited by 1 | Viewed by 1190
Abstract
Accurate and efficient path planning in indoor environments relies on high-quality navigation networks that faithfully represent the spatial and semantic structure of the environment. Three-dimensional semantic point clouds provide valuable spatial and semantic information for navigation tasks. However, extracting detailed navigation networks from 3D semantic point clouds remains a challenge, especially in complex indoor spaces like staircases and multi-floor environments. This study presents a comprehensive framework for developing and extracting robust navigation network models, specifically designed for indoor path planning applications. The main contributions include (1) a preprocessing pipeline that ensures high accuracy and consistency of the input semantic point cloud data; (2) a moving window algorithm for refined node extraction in staircases to enable seamless navigation across vertical spaces; and (3) a lightweight, JSON-based storage structure for efficient network representation and integration. Additionally, we present a more comprehensive sub-node extraction method for hallways to enhance network continuity. We validated the method on two datasets (the public S3DIS dataset and a self-collected HoloLens 2 dataset) and demonstrated its effectiveness through Dijkstra-based path planning. The generated navigation networks supported practical scenarios such as wheelchair-accessible path planning and seamless multi-floor navigation. These findings highlight the practical value of our approach for modern indoor navigation systems, with potential applications in smart building management, robotics, and emergency response. Full article
(This article belongs to the Special Issue Current Research in Indoor Positioning and Localization)
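
A sketch of the JSON-backed network and Dijkstra-based planning mentioned above; the JSON schema, node names, and edge costs are invented for illustration and may differ from the paper's storage structure.

```python
import heapq, json

network_json = """{
  "nodes": {"r101": [1, 2, 0], "hall1": [5, 2, 0],
            "stair1": [9, 2, 0], "stair2": [9, 2, 3.2]},
  "edges": [["r101", "hall1", 4.0], ["hall1", "stair1", 4.0],
            ["stair1", "stair2", 5.1]]
}"""

def dijkstra(graph, start, goal):
    """Plain Dijkstra over an adjacency dict {node: [(nbr, cost), ...]}."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node in prev:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1] if path else []

data = json.loads(network_json)
graph = {}
for u, v, w in data["edges"]:                 # undirected hallway/stair edges
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))
print(dijkstra(graph, "r101", "stair2"))      # r101 -> hall1 -> stair1 -> stair2
```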

23 pages, 12001 KiB  
Article
Enhancing Off-Road Topography Estimation by Fusing LIDAR and Stereo Camera Data with Interpolated Ground Plane
by Gustav Sten, Lei Feng and Björn Möller
Sensors 2025, 25(2), 509; https://doi.org/10.3390/s25020509 - 16 Jan 2025
Viewed by 1060
Abstract
Topography estimation is essential for autonomous off-road navigation. Common methods rely on point cloud data from, e.g., Light Detection and Ranging sensors (LIDARs) and stereo cameras. Stereo cameras produce dense point clouds with larger coverage but lower accuracy. LIDARs, on the other hand, have higher accuracy and longer range but much less coverage, and they are also more expensive. The research question is whether incorporating LIDARs can significantly improve stereo camera accuracy. Current sensor fusion methods use LIDARs' raw measurements directly; thus, the improvement in estimation accuracy is limited to LIDAR-scanned locations. The main contribution of our new method is to construct a reference ground plane through interpolation of LIDAR data, so that the interpolated maps have similar coverage to the stereo camera's point cloud. The interpolated maps are fused with the stereo camera point cloud via Kalman filters to improve a larger section of the topography map. The method is tested in three environments: controlled indoor, semi-controlled outdoor, and unstructured terrain. Compared to the existing method without LIDAR interpolation, the proposed approach reduces the average error by 40% in the controlled environment and 67% in the semi-controlled environment, while maintaining large coverage. The unstructured-environment evaluation confirms its corrective impact. Full article
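
The fusion step can be illustrated as a per-cell scalar Kalman update that treats the interpolated LIDAR ground plane as the prior and the stereo elevation as the measurement; the variances below are placeholders for calibrated sensor noise models.

```python
import numpy as np

def fuse_elevation(z_lidar, var_lidar, z_stereo, var_stereo):
    """Scalar Kalman update per grid cell."""
    gain = var_lidar / (var_lidar + var_stereo)      # Kalman gain K
    z = z_lidar + gain * (z_stereo - z_lidar)        # posterior mean
    var = (1.0 - gain) * var_lidar                   # posterior variance
    return z, var

z_l = np.array([[0.10, 0.12], [0.11, 0.13]])   # interpolated LIDAR prior (m)
z_s = np.array([[0.20, 0.15], [0.09, 0.30]])   # dense stereo measurement (m)
z, v = fuse_elevation(z_l, 0.01, z_s, 0.04)    # LIDAR trusted 4x more here
print(z)
```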

16 pages, 18891 KiB  
Article
Research on the Classification of Traditional Building Materials in Southern Fujian Using the Reflection Intensity Values of Ground-Based LiDAR
by Tsung-Chiang Wu, Neng-Gang Kuan and Wei-Cheng Lu
Sensors 2025, 25(2), 461; https://doi.org/10.3390/s25020461 - 15 Jan 2025
Viewed by 726
Abstract
Ground-based LiDAR technology has been widely applied in various fields for acquiring 3D point cloud data, including spatial coordinates, digital color information, and laser reflectance intensities (I-values). These datasets preserve the digital information of scanned objects, supporting value-added applications. However, raw point cloud data visually represent spatial features but lack attribute information, posing challenges for automated object classification and effective management. Commercial software primarily relies on manual classification, which is time-intensive. This study addresses these challenges by using the laser reflectance intensity (I-value) for automated classification. Boxplot theory is applied to calibrate the data, remove noise, and establish polynomial regression equations correlating intensity with scanning distances. These equations serve as attribute functions for classifying datasets. Focusing on materials in traditional Minnan architecture on Kinmen Island, controlled indoor experiments and outdoor case studies validate the approach. The results show classification accuracies of 74% for wood, 98% for stone, and 93% for brick, demonstrating this method’s effectiveness in enhancing point cloud data applications and management. Full article
(This article belongs to the Special Issue Application of LiDAR Remote Sensing and Mapping)
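
A sketch of the intensity-versus-distance workflow: boxplot (IQR) cleaning, per-material polynomial regression, and nearest-curve classification. The curves and noise levels are synthetic stand-ins for the paper's calibration data.

```python
import numpy as np

def iqr_mask(values):
    """Boxplot rule: True where a value is not an outlier."""
    q1, q3 = np.percentile(values, [25, 75])
    return (values >= q1 - 1.5 * (q3 - q1)) & (values <= q3 + 1.5 * (q3 - q1))

def fit_material_curve(d, i, degree=2):
    """Polynomial regression I(d) for one material, after boxplot cleaning."""
    keep = iqr_mask(i)
    return np.polyfit(d[keep], i[keep], degree)

def classify(distance, intensity, curves):
    """Assign the material whose fitted curve best explains the point."""
    return min(curves, key=lambda m: abs(intensity - np.polyval(curves[m], distance)))

rng = np.random.default_rng(0)
d = rng.uniform(2, 30, 200)                       # scanning distances (m)
curves = {
    "stone": fit_material_curve(d, 0.80 - 0.010 * d + rng.normal(0, 0.02, 200)),
    "brick": fit_material_curve(d, 0.60 - 0.008 * d + rng.normal(0, 0.02, 200)),
    "wood":  fit_material_curve(d, 0.50 - 0.012 * d + rng.normal(0, 0.05, 200)),
}
print(classify(10.0, 0.70, curves))               # -> "stone"
```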

20 pages, 12697 KiB  
Article
Semi-Automated Building Dataset Creation for 3D Semantic Segmentation of Point Clouds
by Hyeongjun Yoo, Yeonggwang Kim, Je-Ho Ryu, Seungjoo Lee and Jong Hun Lee
Electronics 2025, 14(1), 108; https://doi.org/10.3390/electronics14010108 - 30 Dec 2024
Viewed by 1418
Abstract
When 2D drawings are unavailable or significantly differ from the actual site, scan-to-BIM (Building Information Modeling) technology is employed to generate 3D models from point cloud data. This process is predominantly manual, but ongoing research aims to automate it. However, compared to 2D image data, 3D point clouds face a persistent shortage of data, limiting the ability of deep learning models to learn diverse data characteristics and reducing their generalization performance. To address data scarcity, this paper proposes a semi-automated framework for generating datasets for semantic segmentation using 3D point clouds and Building Information Modeling (BIM) models. The framework includes a preprocessing method to spatially segment entire building datasets and applies boundary representations of BIM objects to detect intersections with point cloud data, enabling automated labeling. Using this framework, data from five buildings were processed to create 10 areas. Additionally, six datasets were constructed by combining Stanford 3D Indoor Scene Dataset (S3DIS) data with the newly generated data, and both quantitative and qualitative evaluations were conducted on various areas. Models trained on datasets incorporating diverse domains consistently achieved the highest performance across most areas, demonstrating that diverse domain data significantly enhance model generalization. The proposed framework facilitates the generation of high-quality 3D point cloud datasets from various domains, supporting the improvement of deep learning model generalization. Full article
(This article belongs to the Special Issue Point Cloud Data Processing and Applications)
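
The automated labeling idea can be sketched with axis-aligned boxes standing in for the paper's boundary representations of BIM objects; the class IDs and geometry are hypothetical.

```python
import numpy as np

def label_points_by_bim(points, bim_objects, default=-1):
    """Give each point the class of the first BIM object that contains it."""
    labels = np.full(len(points), default, dtype=int)
    for class_id, (box_min, box_max) in bim_objects:
        inside = np.all((points >= box_min) & (points <= box_max), axis=1)
        labels[(labels == default) & inside] = class_id
    return labels

bim = [  # hypothetical objects: (class_id, (min_corner, max_corner))
    (0, (np.array([0, 0, 0.0]), np.array([5, 0.2, 3.0]))),   # wall
    (1, (np.array([0, 0, 0.0]), np.array([5, 5.0, 0.1]))),   # floor slab
]
pts = np.array([[2, 0.1, 1.5], [2, 3.0, 0.05], [2, 3.0, 1.5]])
print(label_points_by_bim(pts, bim))   # [ 0  1 -1]: wall, floor, unlabeled
```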

16 pages, 5533 KiB  
Article
EGNet: 3D Semantic Segmentation Through Point–Voxel–Mesh Data for Euclidean–Geodesic Feature Fusion
by Qi Li, Yu Song, Xiaoqian Jin, Yan Wu, Hang Zhang and Di Zhao
Sensors 2024, 24(24), 8196; https://doi.org/10.3390/s24248196 - 22 Dec 2024
Viewed by 803
Abstract
With the advancement of service robot technology, the demand for higher boundary precision in indoor semantic segmentation has increased. Traditional methods of extracting Euclidean features using point cloud and voxel data often neglect geodesic information, reducing boundary accuracy for adjacent objects and consuming significant computational resources. This study proposes a novel network, the Euclidean–geodesic network (EGNet), which uses point cloud–voxel–mesh data to characterize detail, contour, and geodesic features, respectively. The EGNet performs feature fusion through Euclidean and geodesic branches. In the Euclidean branch, the features extracted from point cloud data compensate for the detail features lost by voxel data. In the geodesic branch, geodesic features from mesh data are extracted using inter-domain fusion and aggregation modules. These geodesic features are then combined with contextual features from the Euclidean branch, and the simplified trajectory map of the grid is used for up-sampling to produce the final semantic segmentation results. The ScanNet and Matterport datasets were used to demonstrate the effectiveness of the EGNet through visual comparisons with other models. The results demonstrate the effectiveness of integrating Euclidean and geodesic features for improved semantic segmentation, and this approach can inspire further research combining these feature types for enhanced segmentation accuracy. Full article
(This article belongs to the Section Sensor Networks)
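
The Euclidean-branch idea of letting per-point detail compensate for coarse voxel features can be sketched as mean-pooling voxelization followed by a gather-and-concatenate step; this is generic point-voxel fusion, not the EGNet architecture itself.

```python
import numpy as np

def point_voxel_fuse(points, feats, voxel=0.05):
    """Average point features into voxels, then hand each point its voxel's
    pooled context concatenated onto its own fine-grained features."""
    keys = np.floor(points / voxel).astype(int)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.zeros((inverse.max() + 1, feats.shape[1]))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, feats)
    np.add.at(counts, inverse, 1.0)
    voxel_feats = sums / counts[:, None]
    return np.concatenate([feats, voxel_feats[inverse]], axis=1)

pts, f = np.random.rand(1000, 3), np.random.rand(1000, 16)
print(point_voxel_fuse(pts, f).shape)   # (1000, 32)
```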

26 pages, 28365 KiB  
Article
Three-Dimensional Geometric-Physical Modeling of an Environment with an In-House-Developed Multi-Sensor Robotic System
by Su Zhang, Minglang Yu, Haoyu Chen, Minchao Zhang, Kai Tan, Xufeng Chen, Haipeng Wang and Feng Xu
Remote Sens. 2024, 16(20), 3897; https://doi.org/10.3390/rs16203897 - 20 Oct 2024
Cited by 1 | Viewed by 1468
Abstract
Environment 3D modeling is critical for the development of future intelligent unmanned systems. This paper proposes a multi-sensor robotic system for environmental geometric-physical modeling and the corresponding data processing methods. The system is primarily equipped with a millimeter-wave cascaded radar and a multispectral camera to acquire the electromagnetic characteristics and material categories of the target environment and simultaneously employs light detection and ranging (LiDAR) and an optical camera to achieve a three-dimensional spatial reconstruction of the environment. Specifically, the millimeter-wave radar sensor adopts a multiple input multiple output (MIMO) array and obtains 3D synthetic aperture radar images through 1D mechanical scanning perpendicular to the array, thereby capturing the electromagnetic properties of the environment. The multispectral camera, equipped with nine channels, provides rich spectral information for material identification and clustering. Additionally, LiDAR is used to obtain a 3D point cloud, combined with the RGB images captured by the optical camera, enabling the construction of a three-dimensional geometric model. By fusing the data from four sensors, a comprehensive geometric-physical model of the environment can be constructed. Experiments conducted in indoor environments demonstrated excellent spatial-geometric-physical reconstruction results. This system can play an important role in various applications, such as environment modeling and planning. Full article
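
The LiDAR-plus-optical-camera step amounts to projecting the point cloud into the image and sampling RGB; a sketch follows, assuming the intrinsics K and the LiDAR-to-camera extrinsics (R, t) are known from a prior calibration of the rig.

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Project LiDAR points into the camera and sample per-point RGB."""
    pts_cam = points @ R.T + t                      # LiDAR -> camera frame
    in_front = pts_cam[:, 2] > 0.1                  # keep points ahead of camera
    uvw = pts_cam[in_front] @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)     # pixel coordinates
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    idx = np.flatnonzero(in_front)[ok]
    colors[idx] = image[uv[ok, 1], uv[ok, 0]]       # row = v, column = u
    return colors
```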

23 pages, 97967 KiB  
Article
Calibration Methods for Large-Scale and High-Precision Globalization of Local Point Cloud Data Based on iGPS
by Rui Han, Thomas Dunker, Erik Trostmann and Zhigang Xu
Sensors 2024, 24(18), 6114; https://doi.org/10.3390/s24186114 - 21 Sep 2024
Viewed by 1505
Abstract
Point clouds are a common product of local measurement and are widely used because of their high measurement accuracy, high data density, and low susceptibility to environmental influence. However, since the point cloud from a single measurement generally covers only a small spatial extent, local point clouds must be accurately globalized in order to measure large components. In this paper, a method is proposed that uses an iGPS (indoor Global Positioning System) as an external measurement device to achieve high-accuracy globalization of local point cloud data. Two calibration models are discussed for different application scenarios. Verification experiments show that the average calibration errors of the two models are 0.12 mm and 0.17 mm, respectively. The proposed method maintains calibration precision over a large spatial range (about 10 m × 10 m × 5 m), which is of high value for engineering applications. Full article
(This article belongs to the Section Optical Sensors)
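
At its core, globalizing a local scan against iGPS-measured targets reduces to estimating a rigid transform from corresponding points; a Kabsch/SVD sketch is shown below, with synthetic correspondences standing in for real iGPS observations.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

local = np.random.rand(6, 3) * 2.0                  # targets in scanner frame
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
glob = local @ R_true.T + np.array([10.0, 5.0, 1.0])  # same targets via iGPS
R, t = kabsch(local, glob)
print(np.allclose(local @ R.T + t, glob))           # True: cloud globalized
```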

19 pages, 20386 KiB  
Article
YOD-SLAM: An Indoor Dynamic VSLAM Algorithm Based on the YOLOv8 Model and Depth Information
by Yiming Li, Yize Wang, Liuwei Lu and Qi An
Electronics 2024, 13(18), 3633; https://doi.org/10.3390/electronics13183633 - 12 Sep 2024
Cited by 3 | Viewed by 1850
Abstract
To address the low positioning accuracy and poor mapping quality that visual SLAM systems suffer in indoor dynamic environments due to low-quality dynamic-object masks, an indoor dynamic VSLAM algorithm based on the YOLOv8 model and depth information (YOD-SLAM) is proposed, built on the ORB-SLAM3 system. First, the YOLOv8 model produces the original masks of a priori dynamic objects, and depth information is used to refine them. Second, the masks' depth information and center points are used to determine whether an a priori dynamic object has been missed and whether its mask needs to be redrawn. Then, mask edge distances and depth information are used to judge the motion state of non-prior dynamic objects. Finally, all dynamic-object information is removed, and the remaining static objects are used for pose estimation and dense point cloud mapping. Camera positioning accuracy and the quality of the dense point cloud maps are verified using the TUM RGB-D dataset and real environment data. The results show that YOD-SLAM achieves higher positioning accuracy and better dense point cloud mapping in dynamic scenes than other advanced SLAM systems such as DS-SLAM and DynaSLAM. Full article
(This article belongs to the Section Computer Science & Engineering)
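
The depth-based mask refinement can be sketched as trimming mask pixels whose depth strays from the mask's median depth; the tolerance is an assumption, and YOD-SLAM's actual criteria (edge distances, center points, missed-detection checks) are more involved.

```python
import numpy as np

def refine_mask_with_depth(mask, depth, tol=0.25):
    """Keep only mask pixels within tol metres of the mask's median depth,
    stripping background that bleeds past the object's silhouette."""
    d = depth[mask > 0]
    d = d[np.isfinite(d) & (d > 0)]
    if d.size == 0:
        return mask
    refined = mask.copy()
    refined[np.abs(depth - np.median(d)) > tol] = 0
    return refined

mask = np.zeros((4, 4), dtype=np.uint8); mask[1:3, 1:3] = 1
depth = np.full((4, 4), 2.0); depth[1, 1] = 5.0   # background pixel leaking in
print(refine_mask_with_depth(mask, depth))        # pixel (1, 1) removed
```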

21 pages, 2923 KiB  
Article
Multi-Scale Classification and Contrastive Regularization: Weakly Supervised Large-Scale 3D Point Cloud Semantic Segmentation
by Jingyi Wang, Jingyang He, Yu Liu, Chen Chen, Maojun Zhang and Hanlin Tan
Remote Sens. 2024, 16(17), 3319; https://doi.org/10.3390/rs16173319 - 7 Sep 2024
Viewed by 1680
Abstract
With the proliferation of large-scale 3D point cloud datasets, the high cost of per-point annotation has spurred the development of weakly supervised semantic segmentation methods. Current popular research mainly focuses on single-scale classification, which fails to address the significant feature-scale differences between background and objects in large scenes. Therefore, we propose MCCR (Multi-scale Classification and Contrastive Regularization), an end-to-end semantic segmentation framework for large-scale 3D scenes under weak supervision. MCCR first aggregates features and applies random downsampling to the input data. Then, it captures the local features of a random point based on multi-layer features and the input coordinates. These features are fed into the network to obtain the initial and final prediction results, and MCCR iteratively trains the model using strategies such as contrastive learning. Notably, MCCR combines multi-scale classification with contrastive regularization to fully exploit multi-scale features and weakly labeled information. We investigate both point-level and local contrastive regularization to leverage point cloud augmentation and local semantic information, and introduce a Decoupling Layer to guide the loss optimization in different spaces. Results on three popular large-scale datasets, S3DIS, SemanticKITTI and SensatUrban, demonstrate that our model achieves state-of-the-art (SOTA) performance on large-scale outdoor datasets with only 0.1% of points labeled for supervision, while maintaining strong performance on indoor datasets. Full article
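
Point-level contrastive regularization is commonly instantiated as an InfoNCE loss over two augmented views of the same points; the sketch below is a generic stand-in for MCCR's term, not the paper's exact formulation.

```python
import numpy as np

def info_nce(feats_a, feats_b, tau=0.1):
    """InfoNCE: row i of each view is the same point under two augmentations
    (the positive pair); all other pairings act as negatives."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    logits = a @ b.T / tau
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                    # positives on diagonal

feats = np.random.randn(128, 32)
print(info_nce(feats, feats + 0.01 * np.random.randn(128, 32)))
```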

28 pages, 4219 KiB  
Review
Delving into the Potential of Deep Learning Algorithms for Point Cloud Segmentation at Organ Level in Plant Phenotyping
by Kai Xie, Jianzhong Zhu, He Ren, Yinghua Wang, Wanneng Yang, Gang Chen, Chengda Lin and Ruifang Zhai
Remote Sens. 2024, 16(17), 3290; https://doi.org/10.3390/rs16173290 - 4 Sep 2024
Cited by 7 | Viewed by 3625
Abstract
Three-dimensional point clouds, as an advanced imaging technique, enable researchers to capture plant traits more precisely and comprehensively. The task of plant segmentation is crucial in plant phenotyping, yet current methods face limitations in computational cost, accuracy, and high-throughput capabilities. Consequently, many researchers have adopted 3D point cloud technology for organ-level segmentation, extending beyond manual and 2D visual measurement methods. However, analyzing plant phenotypic traits using 3D point cloud technology is influenced by various factors such as data acquisition environment, sensors, research subjects, and model selection. Although the existing literature has summarized the application of this technology in plant phenotyping, there has been a lack of in-depth comparison and analysis at the algorithm model level. This paper evaluates the segmentation performance of various deep learning models on point clouds collected or generated under different scenarios. These methods include outdoor real planting scenarios and indoor controlled environments, employing both active and passive acquisition methods. Nine classical point cloud segmentation models were comprehensively evaluated: PointNet, PointNet++, PointMLP, DGCNN, PointCNN, PAConv, CurveNet, Point Transformer (PT), and Stratified Transformer (ST). The results indicate that ST achieved optimal performance across almost all environments and sensors, albeit at a significant computational cost. The transformer architecture for points has demonstrated considerable advantages over traditional feature extractors by accommodating features over longer ranges. Additionally, PAConv constructs weight matrices in a data-driven manner, enabling better adaptation to various scales of plant organs. Finally, a thorough analysis and discussion of the models were conducted from multiple perspectives, including model construction, data collection environments, and platforms. Full article
(This article belongs to the Section Remote Sensing Image Processing)

19 pages, 3353 KiB  
Article
Assessment of NavVis VLX and BLK2GO SLAM Scanner Accuracy for Outdoor and Indoor Surveying Tasks
by Zahra Gharineiat, Fayez Tarsha Kurdi, Krish Henny, Hamish Gray, Aaron Jamieson and Nicholas Reeves
Remote Sens. 2024, 16(17), 3256; https://doi.org/10.3390/rs16173256 - 2 Sep 2024
Cited by 3 | Viewed by 3354
Abstract
The Simultaneous Localization and Mapping (SLAM) scanner is an easy-to-use, portable Light Detection and Ranging (LiDAR) data acquisition device whose main output is a 3D point cloud covering the scanned scene. Given the importance of accuracy in the surveying domain, this paper assesses the accuracy of two SLAM scanners, the NavVis VLX and the BLK2GO, in both outdoor and indoor environments. Two types of reference data were used: total station (TS) measurements and the static scanner Z+F Imager 5016. Four comparisons were carried out: cloud-to-cloud, cloud-to-mesh, mesh-to-mesh, and an edge detection board assessment. The results showed that indoor SLAM scanner measurements were more accurate (5 mm) than outdoor ones (between 10 mm and 60 mm). Moreover, the cloud-to-cloud comparison provided the most direct accuracy measurement, requiring no intermediate manipulation of the data. Finally, based on the high accuracy, scanning speed, flexibility, and the accuracy differences between the tested cases, SLAM scanners were confirmed to be effective tools for data acquisition. Full article
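
Cloud-to-cloud comparison is, in essence, a nearest-neighbour distance check of the SLAM cloud against the reference cloud; the sketch below uses synthetic clouds with 5 mm noise, echoing the indoor figure, in place of real scans.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(test_cloud, reference_cloud):
    """Nearest-neighbour distances plus the usual summary statistics."""
    d, _ = cKDTree(reference_cloud).query(test_cloud)
    return {"mean": d.mean(), "rmse": float(np.sqrt((d ** 2).mean())),
            "p95": float(np.percentile(d, 95))}

ref = np.random.rand(50000, 3) * 10.0                 # stand-in reference scan
slam = ref + np.random.normal(0.0, 0.005, ref.shape)  # SLAM scan, 5 mm noise
print(cloud_to_cloud(slam, ref))
```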
