Search Results (2,088)

Search Parameters:
Keywords = cloud-point extraction

23 pages, 1174 KB  
Article
A Reproducible Methodology for 3D Tree-Structure Mensuration and Risk-Oriented Decision Support: Integrating SfM–MVS, Field Referencing, and Rule-Based TRAQ/ALARP Logic
by Elias Milios and Kyriaki Kitikidou
Forests 2026, 17(4), 431; https://doi.org/10.3390/f17040431 - 28 Mar 2026
Abstract
This manuscript presents a transferable and reproducible methodology for quantitative 3D tree-structure mensuration and transparent, rule-based decision support for tree risk management. The workflow integrates (i) Structure-from-Motion/Multi-View Stereo (SfM–MVS) reconstruction from multi-view imagery, (ii) independent referencing to ensure metric scaling and a consistent local frame, and (iii) point cloud analytics to derive branch-level geometric descriptors (e.g., base diameter, length, inclination, slenderness, and projected reach). A clear rule-based layer operationalizes Tree Risk Assessment Qualification (TRAQ)-style risk components and As Low As Reasonably Practicable (ALARP) principles to map geometry and exposure into auditable management recommendations (e.g., monitoring intervals, pruning/weight reduction, supplemental support, and exclusion-zone planning). To provide a real-data example, the demonstration uses the public Fuji-SfM apple orchard dataset, including three neighboring trees with partially overlapping crowns for tree instance extraction and subsequent TRAQ/ALARP scenarios on an outer tree. The proposed decision layer is intentionally based on external geometry and exposure; internal decay indicators and species-specific mechanical properties (e.g., Modulus of Elasticity (MOE), Modulus of Rupture (MOR)) are outside this demonstration and should be incorporated via complementary diagnostics in operational deployments. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
23 pages, 7893 KB  
Article
Long-Tail Learning for Three-Dimensional Pavement Distress Segmentation Using Point Clouds Reconstructed from a Consumer Camera
by Pengjian Cheng, Junyan Yi, Zhongshi Pei, Zengxin Liu, Dayong Jiang and Abduhaibir Abdukadir
Remote Sens. 2026, 18(7), 1008; https://doi.org/10.3390/rs18071008 - 27 Mar 2026
Abstract
The application of 3D data in pavement inspection represents an emerging trend. Acquiring and measuring the 3D information of pavement distress enables a more comprehensive assessment of severity, thereby allowing for accurate monitoring and evaluation of the pavement’s technical condition. Existing methods face challenges in high-cost pavement scanning and insufficient research on automated 3D distress segmentation. This study employed a consumer-grade action camera for data acquisition and constructed an engineering-aligned 3D point cloud dataset of pavements. Then a long-tail class imbalance mitigation strategy was introduced, integrating adaptive re-sampling with a weighted fusion loss function, effectively balancing minority class representation. The proposed network, named PointPaveSeg, was a dedicated point cloud processing architecture. A dual-stream feature fusion module was designed for the encoder layer, which decoupled geometric and semantic features to improve distress extraction capability. The network incorporated a hierarchical feature propagation structure enhanced by edge reinforcement, global interaction, and residual connections. Experimental results demonstrated that PointPaveSeg achieved an mIoU of 78.45% and an accuracy of 95.43%. In the field evaluation, post-processing and geometric information extraction were performed on the segmented point clouds. The results showed high consistency with manual measurements. Testing confirmed the method’s practical applicability in real-world projects, offering a new lightweight alternative for intelligent pavement monitoring and maintenance systems. Full article
(This article belongs to the Special Issue Point Cloud Data Analysis and Applications)
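The long-tail strategy described above combines adaptive re-sampling with a weighted fusion loss. As a generic illustration of the weighting half, here is the common "effective number of samples" class-balancing scheme; the exact loss used by PointPaveSeg is not specified in the abstract, so treat this as a minimal sketch, not the authors' implementation:

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes, beta=0.999):
    """Per-class loss weights from label frequencies.

    Uses the 'effective number of samples' scheme:
    w_c = (1 - beta) / (1 - beta**n_c), normalized to sum to num_classes.
    A common long-tail baseline, not necessarily PointPaveSeg's weighting.
    """
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    counts = np.maximum(counts, 1.0)          # avoid division by zero
    weights = (1.0 - beta) / (1.0 - beta ** counts)
    return weights * num_classes / weights.sum()

# Toy example: class 0 dominates (intact pavement), classes 1-2 are rare distress.
labels = np.array([0] * 900 + [1] * 80 + [2] * 20)
w = inverse_frequency_weights(labels, num_classes=3)
print(w)  # rare classes receive larger weights
```

Multiplying each class's cross-entropy term by its weight lets the minority distress classes contribute comparably to the dominant background class.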

31 pages, 6307 KB  
Article
A Novel Urban Biological Parameter Estimation Method Based on LiDAR Point Cloud Single-Tree Segmentation
by Tongtong Lu, Fang Huang, Yuxin Ding, Qingzhe Lv, Hao Guan, Gongwei Li, Xiang Kang and Geer Teng
Remote Sens. 2026, 18(7), 1001; https://doi.org/10.3390/rs18071001 - 27 Mar 2026
Abstract
Aiming at diverse urban tree structures and the difficulty of extracting and utilizing vegetation point clouds, this study proposed single-tree-scale biological parameter estimation methods for urban scenarios to enhance the application value of point clouds in urban greening management. For single-tree segmentation, a method was constructed based on constraints from the trees’ geometric features combined with gravitational modeling, called the CGF-CG single-tree segmentation method. This method (i) combines clustering and principal direction analysis to extract trunk points, (ii) introduces canopy segmentation based on trunk positions, and (iii) optimizes edge point attributes via a gravitational model. Based on CGF-CG’s accurate results, an improved random forest method for single-tree biological parameter (IRF-BP) estimation (aboveground biomass, carbon storage, leaf area index, living vegetation volume) was proposed: (i) correlation analysis with variable screening, (ii) adaptive feature selection and pigeon-inspired optimization to enhance model generalization, and (iii) Shapley Additive Explanations (SHAP) to improve interpretability. On this basis, a complete model for different tree species was constructed. Validation showed that CGF-CG exhibited negligible over-segmentation and under-segmentation in the selected study areas, with overall average precision, recall, and F1-score above 98.5%. Additionally, on the selected overall region, the overall mF1 score, mPTP, and mPTR of our method are 99.13%, 99.15%, and 99.12%, respectively, outperforming the Forestmetrics, lidR, PyCrown, and DBSCAN methods. IRF-BP performed well, with a highest R2 of 0.81 and a lowest mean absolute percentage error of 7.5%, effectively surpassing traditional models such as RFR, GBR, KNN, and XGB. In summary, the results provide theoretical and technical support for urban green resource management and evaluation. Full article

19 pages, 6028 KB  
Article
Multi-View Point Cloud Registration Method for Automated Disassembly of Container Twist Locks
by Chao Mi, Teng Wang, Xintai Man, Mengjie He, Zhiwei Zhang and Yang Shen
J. Mar. Sci. Eng. 2026, 14(7), 605; https://doi.org/10.3390/jmse14070605 - 25 Mar 2026
Abstract
With the continuous expansion of maritime trade scale, ports have put forward increasingly higher requirements for transshipment efficiency. Container twist lock disassembly is a key link in the loading and unloading process, and its automation level has a significant impact on the ship’s berthing time at the port. Aiming at the demand of automated disassembly for high-precision 3D vision, this paper proposes a multi-view point cloud local registration method for twist lock recognition. First, Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) is used to extract the keyhole region with the highest overlap in multi-view point clouds, reducing the interference from non-overlapping structures. Then, a two-stage strategy of “coarse registration + fine registration” is adopted: initial alignment is achieved through Random Sample Consensus (RANSAC), and the Iterative Closest Point (ICP) algorithm is improved by combining adaptive distance threshold and normal consistency constraint to complete fine registration. Experimental results show that the proposed method outperforms the global registration scheme in both accuracy and efficiency: the Root Mean Square Error (RMSE) is reduced to 2.15 mm, the Relative Mean Distance (RMD) is reduced to 1.81 mm, and the registration time is approximately 2.41 s. Compared with global registration, the efficiency is improved by 44.2%, which can meet the real-time requirements of continuous operation at automated terminals for the perception link and the time constraints for subsequent manipulator control. The research results preliminarily verify the application potential of this method in the scenario of automated twist lock disassembly. Full article
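The fine-registration stage described above is an ICP variant with an adaptive distance threshold and a normal-consistency constraint. A minimal point-to-point ICP sketch with quantile-based adaptive rejection (normal consistency omitted, and not the authors' exact implementation) might look like:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30, trim=0.9):
    """Point-to-point ICP with an adaptive distance threshold.

    Each iteration keeps only correspondences below a quantile of the
    current nearest-neighbor distances, a simple stand-in for the
    paper's adaptive threshold; the normal-consistency constraint is
    omitted. Returns a 4x4 rigid transform mapping src onto dst.
    """
    T = np.eye(4)
    cur = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        d, idx = tree.query(cur)
        keep = d <= np.quantile(d, trim)     # adaptive outlier rejection
        p, q = cur[keep], dst[idx[keep]]
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)  # Kabsch: optimal rotation
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = q.mean(0) - R @ p.mean(0)
        cur = cur @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T

# Sanity check: recover a small known rigid motion.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
c, s = np.cos(0.05), np.sin(0.05)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.08])
dst = src @ R_true.T + t_true
T = icp(src, dst)
```

A RANSAC coarse stage (as in the paper) would supply the initial pose when the starting misalignment is too large for nearest-neighbor matching to succeed.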

31 pages, 16969 KB  
Article
Research on Cooperative Vehicle–Infrastructure Perception Integrating Enhanced Point-Cloud Features and Spatial Attention
by Shiyang Yan, Yanfeng Wu, Zhennan Liu and Chengwei Xie
World Electr. Veh. J. 2026, 17(4), 164; https://doi.org/10.3390/wevj17040164 - 24 Mar 2026
Abstract
Vehicle–infrastructure cooperative perception (VICP) extends the sensing capability of single-vehicle systems by integrating multi-source information from onboard and roadside sensors, thereby alleviating limitations in sensing range and field-of-view coverage. However, in complex urban environments, the robustness of such systems—particularly in terms of blind-spot coverage and feature representation—is severely affected by both static and dynamic occlusions, as well as distance-induced sparsity in point cloud data. To address these challenges, a 3D object detection framework incorporating point cloud feature enhancement and spatially adaptive fusion is proposed. First, to mitigate feature degradation under sparse and occluded conditions, a Redefined Squeeze-and-Excitation Network (R-SENet) attention module is integrated into the feature encoding stage. This module employs a dual-dimensional squeeze-and-excitation mechanism operating across pillars and intra-pillar points, enabling adaptive recalibration of critical geometric features. In addition, a Feature Pyramid Backbone Network (FPB-Net) is designed to improve target representation across varying distances through multi-scale feature extraction and cross-layer aggregation. Second, to address feature heterogeneity and spatial misalignment between heterogeneous sensing agents, a Spatial Adaptive Feature Fusion (SAFF) module is introduced. By explicitly encoding the origin of features and leveraging spatial attention mechanisms, the SAFF module enables dynamic weighting and complementary fusion between fine-grained vehicle-side features and globally informative roadside semantics. Extensive experiments conducted on the DAIR-V2X benchmark and a custom dataset demonstrate that the proposed approach outperforms several state-of-the-art methods. 
Specifically, Average Precision (AP) scores of 0.762 and 0.694 are achieved at an IoU threshold of 0.5, while AP scores of 0.617 and 0.563 are obtained at an IoU threshold of 0.7 on the two datasets, respectively. Furthermore, the proposed framework maintains real-time inference performance, highlighting its effectiveness and practical potential for real-world deployment. Full article
(This article belongs to the Section Automated and Connected Vehicles)
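For readers unfamiliar with the AP-at-IoU metric quoted above: a detection counts as a true positive only if its predicted box overlaps a ground-truth box beyond the IoU threshold (here 0.5 or 0.7). A sketch for axis-aligned 3D boxes; benchmarks such as DAIR-V2X actually score rotated boxes, so this conveys the thresholding idea only:

```python
import numpy as np

def iou_3d_axis_aligned(a, b):
    """IoU of two axis-aligned 3D boxes (xmin, ymin, zmin, xmax, ymax, zmax).

    Rotated-box IoU, as used by 3D detection benchmarks, needs polygon
    clipping in the ground plane; the axis-aligned case is shown for clarity.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.maximum(hi - lo, 0.0))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter)

# Two unit cubes overlapping by half along x:
print(iou_3d_axis_aligned((0, 0, 0, 1, 1, 1), (0.5, 0, 0, 1.5, 1, 1)))  # 0.5/1.5 = 1/3
```

At IoU 1/3, this match would fail both the 0.5 and 0.7 thresholds, which is why AP values typically drop between the two operating points (0.762 vs. 0.617 above).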

27 pages, 4296 KB  
Article
Research on Lightweight Apple Detection and 3D Accurate Yield Estimation for Complex Orchard Environments
by Bangbang Chen, Xuzhe Sun, Xiangdong Liu, Baojian Ma and Feng Ding
Horticulturae 2026, 12(3), 393; https://doi.org/10.3390/horticulturae12030393 - 22 Mar 2026
Abstract
Severe foliage occlusion and dynamically changing lighting conditions in complex orchard environments pose significant challenges for visual perception systems in automated apple harvesting, including low detection accuracy, poor robustness, and insufficient real-time performance. To address these issues, this study proposes an improved lightweight detection network based on YOLOv11, named YOLO-WBL, along with a precise yield estimation algorithm based on 3D point clouds, termed CLV. The YOLO-WBL network is optimized in three aspects: (1) A C3K2_WT module integrating wavelet transform is introduced into the backbone network to enhance multi-scale feature extraction capability; (2) A weighted bidirectional feature pyramid network (BiFPN) is adopted in the neck network to improve the efficiency of multi-scale feature fusion; (3) A lightweight shared convolution separated batch normalization detection head (Detect-SCGN) is designed to significantly reduce the parameter count while maintaining accuracy. Based on this detection model, the CLV algorithm deeply integrates depth camera point cloud information through 3D coordinate mapping, irregular point cloud reconstruction, and convex hull volume calculation to achieve accurate estimation of individual fruit volume and total yield. Experimental results demonstrate that: (1) The YOLO-WBL model achieves a precision of 93.8%, recall of 79.3%, and mean average precision (mAP@0.5) of 87.2% on the apple test set; (2) The model size is only 3.72 MB, a reduction of 28.87% compared to the baseline model; (3) When deployed on an NVIDIA Jetson Xavier NX edge device, its inference speed reaches 8.7 FPS, meeting real-time requirements; (4) In scenarios with an occlusion rate below 40%, the mean absolute percentage error (MAPE) of yield estimation can be controlled within 8%. Experimental validation was conducted using apple images selected from the dataset under varying lighting intensities and fruit occlusion conditions. 
The results demonstrate that the CLV algorithm significantly outperforms traditional average-weight-based estimation methods. This study provides an efficient, accurate, and deployable visual solution for intelligent apple harvesting and yield estimation in complex orchard environments, offering practical reference value for advancing smart orchard production. Full article
(This article belongs to the Special Issue AI for a Precision and Resilient Horticulture)
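The final step of the CLV algorithm, convex hull volume calculation, can be sketched with SciPy's Qhull wrapper. A camera sees only a partial fruit surface, so the preceding 3D mapping and point cloud reconstruction steps are assumed to have already produced a reasonably complete cloud:

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_volume(points):
    """Volume of the convex hull of a 3D point cloud (via Qhull).

    Shows only the last stage of a CLV-style pipeline; the upstream
    reconstruction of the occluded fruit surface is assumed done.
    """
    return ConvexHull(np.asarray(points, float)).volume

# Sanity check: the 8 corners of a 2 x 2 x 2 cube give volume 8.
cube = np.array([[x, y, z] for x in (0, 2) for y in (0, 2) for z in (0, 2)])
print(convex_hull_volume(cube))  # 8.0
```

Summing per-fruit hull volumes (times an assumed density or a volume-to-mass calibration) is one way such a pipeline can convert detections into a yield estimate.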

27 pages, 10027 KB  
Article
An Automatic Scoring Method for Swine Leg Structure Based on 3D Point Clouds
by Yongqi Han, Youjun Yue, Xianglong Xue, Mingyu Li, Yikai Fan, Simon X. Yang, Daniel Morris, Qifeng Li and Weihong Ma
Agriculture 2026, 16(6), 706; https://doi.org/10.3390/agriculture16060706 - 22 Mar 2026
Abstract
The leg structure of swine is closely related to their robustness and longevity. Animals with sound legs generally have longer productive lifespans and higher reproductive efficiency, whereas leg defects can markedly impair performance and shorten service life. To address the high subjectivity, low efficiency, and poor consistency of traditional leg-structure evaluation by humans, this study developed an automatic scoring system for swine leg structure based on 3D point clouds. The hardware components of the system include the acquisition channel, a multi-view time-of-flight (ToF) depth camera array, an industrial computer, and a star-type synchronization hub. The core algorithm modules include point cloud preprocessing, leg segmentation, geometric feature extraction, and structure-based scoring. Body orientation was corrected using principal component analysis (PCA). An adaptive limb region segmentation method was proposed that combines iterative cropping with geometric verification. Two point cloud tasks were performed: key structural points were extracted via multi-scale curvature analysis, and angular and symmetry parameters of the fore- and hindlimbs were computed in the sagittal and coronal planes. Following a “classify first, then score” strategy, a nine-level linear scoring model was constructed. Field validation showed that the classification accuracy exceeded 90%, the scores were significantly negatively correlated with the degree of structural deviation, and multi-frame resampling yielded good repeatability. The processing time per animal ranged from 1.6 s to 3.0 s, which met the requirements for real-time applications. These results demonstrated that the proposed method could automatically identify and quantitatively evaluate swine leg structure, providing efficient and reliable technical support for objective selection and smart pig farming. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
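The PCA-based body-orientation correction mentioned above amounts to rotating the cloud into its principal-axis frame, so the animal's long axis lands on a fixed coordinate axis. A minimal numpy sketch; the axis assignment and sign conventions here are assumptions, not the authors' exact procedure:

```python
import numpy as np

def pca_align(points):
    """Rotate a point cloud so its principal axes match x, y, z.

    The first principal component (largest extent, e.g. the animal's
    body axis) maps to x; eigenvector signs remain arbitrary.
    """
    pts = np.asarray(points, float)
    centered = pts - pts.mean(0)
    # Eigen-decompose the covariance; sort by decreasing variance.
    evals, evecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(evals)[::-1]
    return centered @ evecs[:, order]

# An elongated cloud, arbitrarily rotated, ends up elongated along x.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 3)) * [5.0, 1.0, 0.2]
c, s = np.cos(0.7), np.sin(0.7)
rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
aligned = pca_align(cloud @ rot.T)
print(aligned.std(0))  # per-axis spread is now ordered: x largest, z smallest
```

With the posture standardized this way, sagittal- and coronal-plane angle measurements become comparable across animals regardless of how each one stood in the acquisition channel.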

26 pages, 12317 KB  
Article
Rapid Extraction of Tea Bud Phenotypic Parameters ‘In Situ’ Combining Key Point Recognition and Depth Image Fusion
by Yang Guo, Yiyong Chen, Weihao Yao, Junshu Wang, Jianlong Li, Bo Zhou, Junhong Zhao and Jinchi Tang
Agriculture 2026, 16(6), 704; https://doi.org/10.3390/agriculture16060704 - 21 Mar 2026
Abstract
Real-time measurement of tea bud phenotypes via mobile devices is constrained by model lightweighting challenges, and non-contact, key-point-based measurement of tea bud phenotypes remains largely unexplored. Information on the growth posture of tea buds is an important basis for determining tea maturity grades, quality monitoring, and tea breeding. Therefore, this work develops a deep learning-enabled YOLOv8p-Tea model to estimate key point information of tea bud posture and automatically obtain three-dimensional point cloud information of tea buds by integrating depth information, thereby achieving in situ measurement of tea bud phenotypic parameters. The model is trained and validated on a tea bud (one-bud-three-leaf) image dataset, and its effectiveness is demonstrated through experiments. Compared to the YOLOv8p-pose model, it achieves a mAP50 of 98.3% and a precision (P) of 97% with only 0.72 M parameters, improving mAP50 and P by 1.5% and 1.9%, respectively, while reducing the parameter count by 25%. To validate the accuracy of phenotypic extraction, the model was deployed on edge devices, and 30 one-bud-three-leaf tea buds were randomly selected in a tea garden. The final in situ measurements showed an MRE of 6.63%. Experimental findings indicate that the developed method not only effectively estimates tea bud posture but also accurately achieves in situ measurement of tea bud phenotypes, with potential applications in building smart tea gardens and optimizing tea breeding. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

19 pages, 1588 KB  
Article
Fortification of a Greek Distilled Spirit by Citrus sinensis Antioxidants Extracted Using Green Recovery via Lecithin-Based Extraction: Optimization of Extraction and Stability
by Eleni Bozinou, Vassilis Athanasiadis, Olga Stergiou, Marina Tsakiridou, Stavros I. Lalas and Arhontoula Chatzilazarou
Processes 2026, 14(6), 917; https://doi.org/10.3390/pr14060917 - 12 Mar 2026
Abstract
The sustainable valorization of citrus processing by-products represents a key challenge for the food industry, aiming to reduce waste while recovering valuable bioactive compounds. In this study, a cloud point extraction strategy was developed using soy lecithin as a natural, food-grade surfactant to isolate phenolic antioxidants from orange juice industry residues. Response Surface Methodology was applied to two streams of orange juice by-products, to evaluate the combined effects of pH, NaCl concentration, and lecithin content on extraction efficiency, with total polyphenolic content, DPPH radical scavenging activity, and ferric reducing antioxidant power serving as response variables. Partial Least Squares (PLS) analysis was additionally employed to integrate all antioxidant responses and identify a multivariate optimum. The optimized conditions (pH 3.4, 12% NaCl, 11% lecithin) enabled maximal recovery of antioxidant constituents, highlighting the effectiveness of lecithin-based micellar systems. To assess practical applicability, the optimized extract from the oil emulsion residue (Stream A) was incorporated into tsipouro, a traditional Greek distillate, and its stability was monitored under controlled light and temperature conditions for 30 days at three concentration levels. Results demonstrated that both environmental factors significantly influenced antioxidant retention and physical stability, underscoring the importance of formulation design. Specifically, high gel concentration at 2% w/v, low temperature at 20 °C and light exposure provided the highest overall desirability for TPC, FRAP, and DPPH responses. Overall, this work introduces a green, scalable, and food-compatible extraction approach that not only supports circular economy principles but also opens new opportunities for the development of functional alcoholic beverages enriched with natural antioxidants. Full article
(This article belongs to the Special Issue Analysis and Processes of Bioactive Components in Natural Products)
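Response Surface Methodology, as applied above, fits a second-order polynomial to the design points and solves for the stationary point of the fitted surface. A two-factor sketch with synthetic data; the factor values and the planted optimum (pH 3.4, 12% NaCl) merely echo numbers quoted in the abstract and are not the paper's data:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Least-squares fit of the standard two-factor second-order model:
    y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2.
    """
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def stationary_point(beta):
    """Solve grad = 0 for the fitted surface (a maximum if H is negative definite)."""
    b0, b1, b2, b3, b4, b5 = beta
    H = np.array([[2 * b3, b5], [b5, 2 * b4]])
    return np.linalg.solve(H, [-b1, -b2])

# Synthetic response with a known maximum at (pH=3.4, NaCl=12%):
rng = np.random.default_rng(2)
X = rng.uniform([2, 5], [5, 20], size=(60, 2))
y = 10 - (X[:, 0] - 3.4) ** 2 - 0.05 * (X[:, 1] - 12.0) ** 2
beta = fit_quadratic_rsm(X, y)
print(stationary_point(beta))  # close to [3.4, 12.0]
```

The paper additionally uses Partial Least Squares to combine several antioxidant responses (TPC, DPPH, FRAP) into one multivariate optimum; the sketch above covers only a single response.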

17 pages, 8581 KB  
Article
A Fully Automated Deep Learning Pipeline for Anatomical Landmark Localization on Three-Dimensional Pelvic Surface Scans
by Woosu Choi and Jun-Su Jang
Sensors 2026, 26(6), 1760; https://doi.org/10.3390/s26061760 - 10 Mar 2026
Abstract
Accurate identification of anatomical landmarks on three-dimensional (3D) pelvic surface scans is essential for musculoskeletal assessment, yet manual procedures remain limited by operator dependence and soft tissue variability. This study presents a fully automated deep learning pipeline for localizing anatomical landmarks on the posterior pelvic region from raw 3D point cloud data. The pipeline integrates three modules: PelvicROINet for extracting the region of interest, PelvicAlignNet for rotation correction to standardize posture, and PelvicLandmarkNet for localizing six anatomical landmarks including the bilateral posterior superior iliac spines, bilateral iliac crests, L1, and L4. The models were trained independently with task-specific annotations and combined sequentially during inference. Under a subject-level split evaluation setting, the fully integrated system achieved a median error of 11.25 mm, demonstrating consistent localization performance across unseen subjects. Compared with manual landmark marking, the automated measurements showed improved within-visit repeatability, with reduced variability and higher intraclass correlation coefficients. The entire inference process required approximately three seconds per scan, supporting near real-time clinical applicability. These results indicate that the proposed modular framework enhances numerical consistency and robustness in surface-based pelvic landmark assessment and provides a scalable foundation for AI-assisted musculoskeletal evaluation and longitudinal monitoring. Full article

20 pages, 9101 KB  
Article
Automatic Defect Detection for Concrete Bridge Decks Using Geometric Feature Augmentation and Robust Point Cloud Learning Strategy
by Zhe Sun, Siqi Li, Minghui Huang and Qinglei Meng
Appl. Sci. 2026, 16(5), 2618; https://doi.org/10.3390/app16052618 - 9 Mar 2026
Abstract
Surface defects such as depressions, heaving, and irregular undulations frequently develop on aging concrete bridge decks under repeated traffic loading and environmental effects. Accurate and objective identification of such defects is essential for structural serviceability and safety, yet manual inspection remains labor-intensive and subjective. This study develops a systematic framework for surface defect identification through geometric feature augmentation with a streamlined point cloud learning strategy. In practical engineering scenarios, point cloud data of concrete bridge decks can be periodically acquired via vehicle-mounted mobile laser scanning (MLS) systems and subsequently streamlined for analysis. The proposed method heightens defect sensitivity by extracting interpretable geometric descriptors, further integrating multi-scale representations to capture surface defects across varying spatial extents. Evaluated on a public point-level annotated benchmark, the proposed method clearly outperforms the same network trained with geometric coordinates only. To improve result reliability, all experiments were repeated four times with different random seeds, and the performance is reported as mean ± standard deviation. Results show that the proposed method achieves a precision of 0.597 ± 0.021 and an accuracy of 0.933 ± 0.009 under the benchmark protocol. Overall, these results demonstrate a reproducible proof of concept under controlled benchmark conditions for bridge deck surface defect segmentation, while broader cross-site and cross-sensor validation will be pursued in future work. Full article
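One common family of "interpretable geometric descriptors" for this kind of surface-defect task is eigenvalue features of the local covariance (planarity, linearity, surface variation). The abstract does not state which descriptors the authors use, so the following is a generic sketch of one such feature, computed per point over a k-nearest-neighbor patch:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=16):
    """Per-point surface variation: lambda_min / (l1 + l2 + l3).

    Eigenvalues of the local covariance over k nearest neighbors; the
    feature is near zero on a flat deck surface and grows at
    depressions, heaving, and edges. One illustrative descriptor,
    not the paper's exact feature set.
    """
    pts = np.asarray(points, float)
    _, idx = cKDTree(pts).query(pts, k=k)
    feats = np.empty(len(pts))
    for i, nb in enumerate(idx):
        ev = np.linalg.eigvalsh(np.cov(pts[nb].T))  # ascending order
        feats[i] = ev[0] / ev.sum()
    return feats

# A flat deck patch plus one out-of-plane point: the "defect" scores highest.
rng = np.random.default_rng(3)
deck = np.column_stack([rng.uniform(0, 10, 400), rng.uniform(0, 10, 400), np.zeros(400)])
deck[0, 2] = 1.0  # a single raised point
f = surface_variation(deck)
```

Stacking several such descriptors at multiple neighborhood scales, as the paragraph describes, gives the network explicit sensitivity to defects of different spatial extents.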

19 pages, 3692 KB  
Article
Automated Processing and Deviation Analysis of 3D Pipeline Point Clouds Based on Geometric Features
by Shaofeng Jin, Kangrui Fu, Chengzhen Yang and Huanhuan Rui
J. Imaging 2026, 12(3), 115; https://doi.org/10.3390/jimaging12030115 - 9 Mar 2026
Abstract
To meet the strict non-contact measurement requirements for the assembly of aircraft engine pipelines and to overcome the limitations of the traditional three-dimensional laser scanning workflow, this study proposes an automated pipeline point cloud processing and deviation analysis framework. Through a standardized three-dimensional laser scanning procedure, high-resolution pipeline point clouds are obtained and preprocessed. Based on the geometric characteristics of the pipeline, automated algorithms for point cloud feature segmentation, axis extraction, and model registration are developed. Particularly, the three-dimensional extended Douglas–Peucker (DP) algorithm is introduced to achieve efficient point cloud downsampling while retaining necessary geometric and structural features. These algorithms are fully integrated into a unified software platform, supporting one-click operation, and can automatically analyze and obtain five key types of pipeline deviations: angular deviation, radial deviation, axial deviation, roundness error, and diameter error. The platform also provides intuitive visualization effects and comprehensive report generation functions to facilitate quantitative inspection and analysis. Test results show that the proposed method significantly improves the processing efficiency and measurement reliability of complex pipeline systems. The developed framework provides a powerful practical solution for the automated geometric inspection of aircraft engine pipelines and lays a solid foundation for subsequent quality assessment tasks. Full article
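The classic Douglas–Peucker algorithm extends naturally to 3D by replacing the 2D point-to-line distance with its 3D counterpart; the paper's extended variant for point cloud downsampling may differ in detail. A minimal recursive sketch on an ordered polyline, such as samples along an extracted pipe axis:

```python
import numpy as np

def douglas_peucker_3d(pts, tol):
    """Recursive Douglas-Peucker simplification of an ordered 3D polyline.

    Keeps endpoints; recursively keeps the interior point farthest from
    the chord whenever its perpendicular distance exceeds tol. `pts`
    must be ordered (e.g. successive axis samples), as in 2D DP.
    """
    pts = np.asarray(pts, float)
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]
    ab = b - a
    # Perpendicular distance of every interior point to the chord a-b.
    ap = pts[1:-1] - a
    d = np.linalg.norm(np.cross(ap, ab), axis=1) / np.linalg.norm(ab)
    i = int(np.argmax(d))
    if d[i] <= tol:
        return np.array([a, b])
    left = douglas_peucker_3d(pts[: i + 2], tol)   # up to the split point
    right = douglas_peucker_3d(pts[i + 1:], tol)   # from the split point on
    return np.vstack([left[:-1], right])

# A straight run with one bend keeps only its 3 defining vertices.
line = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 1, 0], [4, 2, 0]], float)
print(douglas_peucker_3d(line, tol=0.1))
```

This is why DP-style downsampling preserves geometric structure: points on straight or gently curved runs are dropped, while bends and fittings, exactly where deviation analysis matters, are retained.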

23 pages, 4427 KB  
Article
Virtual Reassembly Method for Cultural Relic Fragments Based on Multi-Feature Extraction
by Jianghong Zhao, Jia Yang, Mengtian Cao, Lisha Yin, Rui Liu and Xinfeng Chang
Appl. Sci. 2026, 16(5), 2588; https://doi.org/10.3390/app16052588 - 8 Mar 2026
Abstract
The virtual reassembly of fragmented cultural relics remains a challenging task due to incomplete contours, complex fracture geometries, and the lack of reliable accuracy evaluation when ground-truth models are unavailable. To address these issues, this study proposes an automated virtual reassembly framework based on multi-feature extraction and hierarchical fragment matching. First, contour points are extracted from fragment point clouds using neighborhood roughness analysis and further refined through a Cylinder Box-based completion strategy to recover missing contour segments. Then, multiple complementary features, including Fast Point Feature Histograms (FPFHs), Heat Kernel Signatures (HKSs), and a spatial cube-based contour shape descriptor, are jointly constructed to characterize both local geometric details and global structural properties of fragments. To improve matching efficiency and robustness, a tree-based fragment retrieval strategy combined with a coarse-to-fine registration scheme is employed to identify adjacent fragments while reducing computational complexity. In addition, a pseudo-ground-truth accuracy evaluation method is introduced to quantitatively assess cumulative reassembly errors in the absence of reliable reference data. Experiments conducted on the public Buddha head dataset demonstrate that the proposed method achieves stable and visually consistent reassembly results, with a cumulative error as low as 1.58%, while significantly reducing retrieval computations compared with exhaustive matching strategies. These results indicate that the proposed framework provides a practical and verifiable solution for the automated digital restoration of fragmented cultural relics. Full article
(This article belongs to the Special Issue Non-Destructive Techniques for Heritage Conservation)
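The first step above, extracting contour points by neighborhood roughness, can be approximated with a small NumPy sketch: for each point, take its k nearest neighbors and use the smallest eigenvalue of their covariance (the out-of-plane variance) as a roughness score. The k value, threshold, and this particular roughness definition are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def roughness_contour(points, k=8, thresh=0.05):
    """Flag candidate contour/fracture points by neighborhood roughness.
    Score = smallest eigenvalue of the k-nearest-neighbor covariance,
    i.e., the variance normal to the locally fitted plane."""
    pts = np.asarray(points, dtype=float)
    scores = np.empty(len(pts))
    for i in range(len(pts)):
        d = np.linalg.norm(pts - pts[i], axis=1)
        nbrs = pts[np.argsort(d)[:k]]          # k nearest, including the point itself
        cov = np.cov(nbrs.T)                   # 3x3 covariance of the neighborhood
        scores[i] = np.linalg.eigvalsh(cov)[0] # smallest eigenvalue = out-of-plane spread
    return scores > thresh

# Flat 5x5 grid plus one out-of-plane spike: only points near the spike score high.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
plane = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(25)])
cloud = np.vstack([plane, [[2.0, 2.0, 1.0]]])
mask = roughness_contour(cloud, k=8, thresh=0.05)
```

On real fragments, points along fracture edges break local planarity the same way, which is why a roughness threshold isolates contour candidates before the descriptor-matching stage.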

20 pages, 8261 KB  
Article
SGE-Flow: 4D mmWave Radar 3D Object Detection via Spatiotemporal Geometric Enhancement and Inter-Frame Flow
by Huajun Meng, Zijie Yu, Cheng Li, Chao Li and Xiaojun Liu
Sensors 2026, 26(5), 1679; https://doi.org/10.3390/s26051679 - 6 Mar 2026
Viewed by 334
Abstract
4D millimeter-wave radar provides a promising solution for robust perception in adverse weather. Existing detectors still struggle with sparse and noisy point clouds, and maintaining real-time inference while achieving competitive accuracy remains challenging. We propose SGE-Flow, a streamlined PointPillars-based 4D radar 3D detector that embeds lightweight spatiotemporal geometric enhancements into the voxelization front-end. Velocity Displacement Compensation (VDC) leverages compensated radial velocity to align accumulated points in physical space and improve geometric consistency. Distribution-Aware Density (DAD) enables fast density feature extraction by estimating per-pillar density from simple statistical moments, which also restores vertical distribution cues lost during pillarization. To compensate for the absence of tangential velocity measurements, a Transformer-based Inter-frame Flow (IFF) module infers latent motion from frame-to-frame pillar occupancy changes. Evaluations on the View-of-Delft (VoD) dataset show that SGE-Flow achieves 53.23% 3D mean Average Precision (mAP) while running at 72 frames per second (FPS) on an NVIDIA RTX 3090. The proposed modules are plug-and-play and can also improve strong baselines such as MAFF-Net. Full article
(This article belongs to the Section Radar Sensors)
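One plausible reading of the Velocity Displacement Compensation (VDC) step is sketched below: each accumulated radar point is shifted along its line-of-sight direction by the displacement implied by its compensated radial velocity over the accumulation interval. The exact compensation scheme in SGE-Flow may differ; this is an illustrative sketch only.

```python
import numpy as np

def velocity_displacement_compensation(points, v_r, dt):
    """Shift each radar point along its radial (line-of-sight) direction
    by v_r * dt, roughly re-aligning multi-frame accumulated points in
    physical space. points: (N, 3) sensor-frame coordinates; v_r: (N,)
    compensated radial velocities; dt: time since that point's frame."""
    pts = np.asarray(points, dtype=float)
    r = np.linalg.norm(pts, axis=1, keepdims=True)
    unit = pts / np.where(r == 0, 1.0, r)  # radial unit vector per point
    return pts + unit * (np.asarray(v_r, dtype=float)[:, None] * dt)

# A point at range 5 m moving away at 5 m/s, accumulated 0.1 s ago,
# is pushed 0.5 m outward along its line of sight.
out = velocity_displacement_compensation([[3.0, 4.0, 0.0]], [5.0], dt=0.1)
```

Because radar measures only the radial velocity component, this correction cannot recover tangential motion, which is exactly the gap the abstract's Inter-frame Flow (IFF) module is meant to fill.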

16 pages, 5250 KB  
Article
Identification of Cypress Bark Beetle-Infested Cypress Based on LiDAR and RGB Imagery
by Ke Wu, Zhiqiang Li, Linpan Feng, Shali Shi, Liangying Zhang, Shixing Zhou, Sen Zhai and Lin Xiao
Forests 2026, 17(3), 328; https://doi.org/10.3390/f17030328 - 6 Mar 2026
Viewed by 233
Abstract
Forest pests and diseases are among the major disturbances affecting the stability of forest ecosystems. Accurate identification of insect-infested trees is therefore crucial for assessing forest health and implementing precision forestry management. This study focuses on stand-level detection of cypress trees (Cupressus funebris Endl.) affected by the cypress bark beetle (Phloeosinus aubei Perris), with a framework that enables individual tree segmentation, infested-tree detection, and stand infestation assessment. Firstly, individual trees were extracted from Light Detection and Ranging (LiDAR) point cloud data using the layer-stacking seed point algorithm. Based on the segmented tree crowns, four vegetation indices (Visible Atmospherically Resistant Index (VARI), Visible-band Difference Vegetation Index (VDVI), Red-Green Index (RGI), and Color Index of Vegetation Extraction (CIVE)) were calculated from Unmanned Aerial Vehicle (UAV) RGB imagery. Insect-infested cypress trees were extracted through threshold segmentation. Through visual interpretation, the optimal vegetation index was determined and the infestation rate at the stand level was calculated. Based on the above framework, a total of 1368 trees were identified in the cypress stand, with a segmentation Precision of 82.51%, a Recall of 80.00%, and an F1-score of 81.24%. RGI achieved the best performance (Precision = 100.00%, Recall = 86.96%, F1-score = 93.02%) and identified 20 infested trees, accounting for 1.46% of the cypress stand. Supplementary experiments further confirm the superiority of the RGI index and the μ ± 2σ thresholding method. These results demonstrate that the proposed method enables rapid detection of infested cypress trees and effective monitoring of stand health and infestation severity, thereby supporting informed decision-making in pest control and forest management. Full article
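The μ ± 2σ thresholding step can be sketched in a few lines: compute the Red-Green Index (RGI, typically mean red reflectance over mean green) per segmented crown, then flag crowns beyond two standard deviations of the stand mean. Infested cypress crowns redden, so the upper tail is flagged here; the sample values and the upper-tail choice are illustrative assumptions.

```python
import numpy as np

def flag_infested(rgi_per_crown, n_sigma=2.0):
    """Flag crowns whose Red-Green Index exceeds mu + n_sigma * sigma
    of the stand distribution (discolored crowns have elevated RGI)."""
    rgi = np.asarray(rgi_per_crown, dtype=float)
    mu, sigma = rgi.mean(), rgi.std()
    return rgi > mu + n_sigma * sigma

# 50 healthy crowns near RGI 0.8 and one reddened crown at 1.5:
# only the outlier crosses the mu + 2*sigma threshold.
rgi = np.array([0.80] * 50 + [1.50])
infested = flag_infested(rgi, n_sigma=2.0)
```

Dividing the flagged count by the total crown count then gives the stand-level infestation rate reported in the abstract (20 of 1368 trees, 1.46%).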
