Search Results (6,437)

Search Parameters:
Keywords = Point Clouds

21 pages, 4314 KiB  
Article
Panoptic Plant Recognition in 3D Point Clouds: A Dual-Representation Learning Approach with the PP3D Dataset
by Lin Zhao, Sheng Wu, Jiahao Fu, Shilin Fang, Shan Liu and Tengping Jiang
Remote Sens. 2025, 17(15), 2673; https://doi.org/10.3390/rs17152673 (registering DOI) - 2 Aug 2025
Abstract
The advancement of Artificial Intelligence (AI) has significantly accelerated progress across various research domains, with growing interest in plant science due to its substantial economic potential. However, the integration of AI with digital vegetation analysis remains underexplored, largely due to the absence of large-scale, real-world plant datasets, which are crucial for advancing this field. To address this gap, we introduce the PP3D dataset, a meticulously labeled collection of about 500 potted plants represented as 3D point clouds, featuring fine-grained annotations for approximately 20 species. The PP3D dataset provides 3D phenotypic data for about 20 plant species spanning model organisms (e.g., Arabidopsis thaliana), potted plants (e.g., foliage and flowering plants), and horticultural plants (e.g., Solanum lycopersicum), covering most of the common and important plant species. Leveraging this dataset, we propose the panoptic plant recognition task, which combines semantic segmentation (stems and leaves) with leaf instance segmentation. To tackle this challenge, we present SCNet, a novel dual-representation learning network designed specifically for plant point cloud segmentation. SCNet integrates two key branches: a cylindrical feature extraction branch for robust spatial encoding and a sequential slice feature extraction branch for detailed structural analysis. By efficiently propagating features between these representations, SCNet achieves superior flexibility and computational efficiency, establishing a new baseline for panoptic plant recognition and paving the way for future AI-driven research in plant science. Full article
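The cylindrical branch mentioned above suggests partitioning the plant point cloud in cylindrical rather than Cartesian coordinates. As a rough, hypothetical illustration of that idea only (SCNet's actual partitioning scheme is not specified in the abstract), the NumPy sketch below converts XYZ points to cylindrical coordinates and bins them into voxel indices; the function name and resolutions are placeholders.

```python
import numpy as np

def cylindrical_voxel_indices(points, rho_res=0.01, phi_bins=180, z_res=0.01):
    """Map XYZ points to (rho, phi, z) voxel indices.

    points : (N, 3) array of x, y, z coordinates (metres).
    The resolutions here are arbitrary placeholders, not SCNet's values.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x**2 + y**2)                      # radial distance from the plant axis
    phi = np.arctan2(y, x)                          # azimuth in [-pi, pi)
    rho_idx = np.floor(rho / rho_res).astype(int)
    phi_idx = np.floor((phi + np.pi) / (2 * np.pi) * phi_bins).astype(int) % phi_bins
    z_idx = np.floor((z - z.min()) / z_res).astype(int)
    return np.stack([rho_idx, phi_idx, z_idx], axis=1)

# Toy usage: 1000 random points around a vertical "stem".
pts = np.random.randn(1000, 3) * np.array([0.05, 0.05, 0.3])
voxels = cylindrical_voxel_indices(pts)
print(voxels.shape, voxels.max(axis=0))
```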

20 pages, 5647 KiB  
Article
Research on the Improved ICP Algorithm for LiDAR Point Cloud Registration
by Honglei Yuan, Guangyun Li, Li Wang and Xiangfei Li
Sensors 2025, 25(15), 4748; https://doi.org/10.3390/s25154748 (registering DOI) - 1 Aug 2025
Abstract
Over three decades of research has been undertaken on point cloud registration algorithms, resulting in mature theoretical frameworks and methodologies. However, among the numerous registration techniques used, the impact of point cloud scanning quality on registration outcomes has rarely been addressed. In most engineering and industrial measurement applications, the accuracy and density of LiDAR point clouds are highly dependent on laser scanners, leading to significant variability that critically affects registration quality. Key factors influencing point cloud accuracy include scanning distance, incidence angle, and the surface characteristics of the target. Notably, in short-range scanning scenarios, incidence angle emerges as the dominant error source. Building on this insight, this study systematically investigates the relationship between scanning incidence angles and point cloud quality. We propose an incident-angle-dependent weighting function for point cloud observations, and further develop an improved weighted Iterative Closest Point (ICP) registration algorithm. Experimental results demonstrate that the proposed method achieves approximately 30% higher registration accuracy compared to traditional ICP algorithms and a 10% improvement over Faro SCENE’s proprietary solution. Full article
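The key idea above is an incidence-angle-dependent weight folded into the ICP objective. The NumPy sketch below shows one weighted least-squares alignment step under two simplifying assumptions: correspondences are already known, and the weight is a plain cos(theta) down-weighting of oblique points rather than the weighting function actually proposed in the paper.

```python
import numpy as np

def incidence_weight(normals, view_dirs):
    """Down-weight points hit at grazing incidence (a cos(theta) placeholder,
    not the paper's weighting function)."""
    cos_theta = np.abs(np.sum(normals * view_dirs, axis=1))
    return np.clip(cos_theta, 1e-3, 1.0)

def weighted_rigid_align(src, dst, w):
    """One weighted ICP step: rotation R and translation t minimising
    sum_i w_i ||R src_i + t - dst_i||^2 (weighted Kabsch algorithm)."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy usage with a known 5-degree rotation and matched points.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.1, 0.0, 0.0])
normals = np.tile([0.0, 0.0, 1.0], (200, 1))
view_dirs = np.tile([0.0, 0.0, 1.0], (200, 1))
R_est, t_est = weighted_rigid_align(src, dst, incidence_weight(normals, view_dirs))
print(np.round(R_est, 3), np.round(t_est, 3))
```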

19 pages, 1408 KiB  
Article
Self-Supervised Learning of End-to-End 3D LiDAR Odometry for Urban Scene Modeling
by Shuting Chen, Zhiyong Wang, Chengxi Hong, Yanwen Sun, Hong Jia and Weiquan Liu
Remote Sens. 2025, 17(15), 2661; https://doi.org/10.3390/rs17152661 (registering DOI) - 1 Aug 2025
Abstract
Accurate and robust spatial perception is fundamental for dynamic 3D city modeling and urban environmental sensing. High-resolution remote sensing data, particularly LiDAR point clouds, are pivotal for these tasks due to their lighting invariance and precise geometric information. However, processing and aligning sequential LiDAR point clouds in complex urban environments presents significant challenges: traditional point-based or feature-matching methods are often sensitive to urban dynamics (e.g., moving vehicles and pedestrians) and struggle to establish reliable correspondences. While deep learning offers solutions, current approaches for point cloud alignment exhibit key limitations: self-supervised losses often neglect inherent alignment uncertainties, and supervised methods require costly pixel-level correspondence annotations. To address these challenges, we propose UnMinkLO-Net, an end-to-end self-supervised LiDAR odometry framework. Our method is as follows: (1) we efficiently encode 3D point cloud structures using voxel-based sparse convolution, and (2) we model inherent alignment uncertainty via covariance matrices, enabling a novel self-supervised loss based on uncertainty modeling. Extensive evaluations on the KITTI urban dataset demonstrate UnMinkLO-Net's effectiveness in achieving highly accurate point cloud registration. Our self-supervised approach, eliminating the need for manual annotations, provides a powerful foundation for processing and analyzing LiDAR data within multi-sensor urban sensing frameworks. Full article
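One way to read the covariance-based uncertainty term is as a Gaussian negative log-likelihood: each alignment residual is weighted by the inverse of its predicted covariance, with a log-determinant penalty so the network cannot simply inflate the uncertainty. The PyTorch sketch below writes down that generic form under this assumption; it is not the exact loss defined by UnMinkLO-Net.

```python
import torch

def uncertainty_weighted_loss(residuals, covariances):
    """Gaussian negative log-likelihood of alignment residuals.

    residuals   : (N, 3) point-to-point alignment errors.
    covariances : (N, 3, 3) predicted per-residual covariance matrices
                  (assumed symmetric positive definite).
    Larger predicted uncertainty down-weights a residual but is penalised
    through the log-determinant term.
    """
    inv_cov = torch.linalg.inv(covariances)
    mahalanobis = torch.einsum('ni,nij,nj->n', residuals, inv_cov, residuals)
    log_det = torch.logdet(covariances)
    return 0.5 * (mahalanobis + log_det).mean()

# Toy usage: random residuals with isotropic placeholder covariances.
res = torch.randn(128, 3) * 0.05
cov = torch.eye(3).repeat(128, 1, 1) * 0.01
print(uncertainty_weighted_loss(res, cov))
```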

28 pages, 4026 KiB  
Article
Multi-Trait Phenotypic Analysis and Biomass Estimation of Lettuce Cultivars Based on SFM-MVS
by Tiezhu Li, Yixue Zhang, Lian Hu, Yiqiu Zhao, Zongyao Cai, Tingting Yu and Xiaodong Zhang
Agriculture 2025, 15(15), 1662; https://doi.org/10.3390/agriculture15151662 - 1 Aug 2025
Abstract
To address the problems of traditional methods that rely on destructive sampling, the poor adaptability of fixed equipment, and the susceptibility of single-view angle measurements to occlusions, a non-destructive and portable device for three-dimensional phenotyping and biomass detection in lettuce was developed. Based on the Structure-from-Motion Multi-View Stereo (SFM-MVS) algorithm, a high-precision three-dimensional point cloud model was reconstructed from multi-view RGB image sequences, and 12 phenotypic parameters, such as plant height and crown width, were accurately extracted. Regression analyses of plant height, crown width, and crown height yielded R2 values of 0.98, 0.99, and 0.99 and RMSE values of 2.26 mm, 1.74 mm, and 1.69 mm, respectively. On this basis, four biomass prediction models were developed using Adaptive Boosting (AdaBoost), Support Vector Regression (SVR), Gradient Boosting Decision Tree (GBDT), and Random Forest Regression (RFR). The results indicated that the RFR model based on the projected convex hull area, point cloud convex hull surface area, and projected convex hull perimeter performed the best, with an R2 of 0.90, an RMSE of 2.63 g, and an RMSEn of 9.53%, indicating that the RFR model was able to accurately estimate lettuce biomass. This research achieves three-dimensional reconstruction and accurate biomass prediction of facility lettuce, and provides a portable and lightweight solution for facility crop growth detection. Full article
(This article belongs to the Section Crop Production)
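As a hedged illustration of the best-performing model described above (a Random Forest regressor on the projected convex-hull area, point-cloud convex-hull surface area, and projected convex-hull perimeter), the scikit-learn sketch below shows the model structure on synthetic feature values; the data, hyperparameters, and the rough feature-to-biomass relationship are placeholders, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 200  # synthetic lettuce samples, not the paper's data

# Placeholder features: projected convex-hull area (cm^2),
# point-cloud convex-hull surface area (cm^2), projected hull perimeter (cm).
X = np.column_stack([
    rng.uniform(50, 600, n),
    rng.uniform(100, 1200, n),
    rng.uniform(30, 120, n),
])
# Synthetic fresh-weight biomass (g), loosely proportional to the features.
y = 0.03 * X[:, 0] + 0.01 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```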

25 pages, 11545 KiB  
Article
Workpiece Coordinate System Measurement for a Robotic Timber Joinery Workflow
by Francisco Quitral-Zapata, Rodrigo García-Alvarado, Alejandro Martínez-Rocamora and Luis Felipe González-Böhme
Buildings 2025, 15(15), 2712; https://doi.org/10.3390/buildings15152712 (registering DOI) - 31 Jul 2025
Abstract
Robotic timber joinery demands integrated, adaptive methods to compensate for the inherent dimensional variability of wood. We introduce a seamless robotic workflow to enhance the measurement accuracy of the Workpiece Coordinate System (WCS). The approach leverages a Zivid 3D camera mounted in an eye-in-hand configuration on a KUKA industrial robot. The proposed algorithm applies a geometric method that strategically crops the point cloud and fits planes to the workpiece surfaces to define a reference frame, calculate the corresponding transformation between coordinate systems, and measure the cross-section of the workpiece. This enables reliable toolpath generation by dynamically updating WCS and effectively accommodating real-world geometric deviations in timber components. The workflow includes camera-to-robot calibration, point cloud acquisition, robust detection of workpiece features, and precise alignment of the WCS. Experimental validation confirms that the proposed method is efficient and improves milling accuracy. By dynamically identifying the workpiece geometry, the system successfully addresses challenges posed by irregular timber shapes, resulting in higher accuracy for timber joints. This method contributes to advanced manufacturing strategies in robotic timber construction and supports the processing of diverse workpiece geometries, with potential applications in civil engineering for building construction through the precise fabrication of structural timber components. Full article
(This article belongs to the Special Issue Architectural Design Supported by Information Technology: 2nd Edition)
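The core geometric step, fitting planes to the cropped workpiece faces and deriving a coordinate frame from them, can be sketched with plain least-squares plane fits. The NumPy example below is a simplified stand-in for the paper's algorithm: it fits a plane to each of two point patches and builds an orthonormal frame from the two normals; the cropping strategy, camera-to-robot calibration, origin choice, and cross-section measurement are omitted.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point patch: returns (centroid, unit normal).
    The normal sign is ambiguous; a real pipeline would orient it consistently."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return centroid, Vt[-1]  # normal = direction of least variance

def workpiece_frame(top_patch, side_patch):
    """Build an orthonormal workpiece frame from the top and side faces.

    z axis : normal of the top face.
    x axis : component of the side-face normal orthogonal to z.
    Origin : centroid of the top patch (a placeholder choice).
    """
    origin, z = fit_plane(top_patch)
    _, side_n = fit_plane(side_patch)
    x = side_n - np.dot(side_n, z) * z
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z])  # rotation from workpiece frame to sensor frame
    return origin, R

# Toy usage: noisy points on two perpendicular faces of a timber beam.
rng = np.random.default_rng(1)
top = np.column_stack([rng.uniform(0, 0.5, 300), rng.uniform(0, 0.1, 300),
                       0.1 + rng.normal(0, 1e-4, 300)])
side = np.column_stack([rng.uniform(0, 0.5, 300), rng.normal(0, 1e-4, 300),
                        rng.uniform(0, 0.1, 300)])
origin, R = workpiece_frame(top, side)
print(np.round(origin, 3), "\n", np.round(R, 3))
```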

30 pages, 7472 KiB  
Article
Small but Mighty: A Lightweight Feature Enhancement Strategy for LiDAR Odometry in Challenging Environments
by Jiaping Chen, Kebin Jia and Zhihao Wei
Remote Sens. 2025, 17(15), 2656; https://doi.org/10.3390/rs17152656 (registering DOI) - 31 Jul 2025
Abstract
LiDAR-based Simultaneous Localization and Mapping (SLAM) serves as a fundamental technology for autonomous navigation. However, in complex environments, LiDAR odometry often experiences degraded localization accuracy and robustness. This paper proposes a computationally efficient enhancement strategy for LiDAR odometry, which improves system performance by reinforcing high-quality features throughout the optimization process. For non-ground features, the method employs statistical geometric analysis to identify stable points and incorporates a contribution-weighted optimization scheme to strengthen their impact in point-to-plane and point-to-line constraints. In parallel, for ground features, locally stable planar surfaces are fitted to replace discrete point correspondences, enabling more consistent point-to-plane constraint formulation during ground registration. Experimental results on the KITTI and M2DGR datasets demonstrated that the proposed method significantly improves localization accuracy and system robustness, while preserving real-time performance with minimal computational overhead. The performance gains were particularly notable in scenarios dominated by unstructured environments. Full article
(This article belongs to the Special Issue Laser Scanning in Environmental and Engineering Applications)

15 pages, 2290 KiB  
Article
Research on Automatic Detection Method of Coil in Unmanned Reservoir Area Based on LiDAR
by Yang Liu, Meiqin Liang, Xiaozhan Li, Xuejun Zhang, Junqi Yuan and Dong Xu
Processes 2025, 13(8), 2432; https://doi.org/10.3390/pr13082432 - 31 Jul 2025
Abstract
The detection of coils in reservoir areas is part of the environmental perception technology of unmanned cranes. To improve the ability of unmanned cranes to perceive environmental information in reservoir areas, a method for the automatic detection of coils based on two-dimensional LiDAR dynamic scanning is proposed, which detects the position and attitude of coils in reservoir areas. The algorithm reconstructs a 3D point cloud map by fusing LiDAR point cloud data with the motion and position information of intelligent cranes. Additionally, a processing method based on histogram statistical analysis and 3D normal curvature estimation is proposed to solve the problem of over-segmentation and under-segmentation in 3D point cloud segmentation. Finally, for segmented point cloud clusters, coil models are fitted by the RANSAC method to identify their position and attitude. The accuracy, recall, and F1 score of the detection model are all higher than 0.91, indicating that the model has a good recognition effect. Full article
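The RANSAC model-fitting step can be pictured with a much-simplified stand-in: a RANSAC circle fit on a 2D cross-section slice of a segmented cluster, which recovers a coil's centre and radius in that plane. The full method fits a coil model in 3D and also estimates attitude; the sketch below, with placeholder tolerances and synthetic data, only illustrates the hypothesise-and-verify pattern.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumcircle (centre, radius) of three 2D points; None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    centre = np.array([ux, uy])
    return centre, np.linalg.norm(centre - p1)

def ransac_circle(points, n_iter=500, tol=0.01, rng=None):
    """RANSAC circle fit on (N, 2) points: returns (centre, radius, inlier_mask)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        fit = circle_from_3pts(*sample)
        if fit is None:
            continue
        centre, radius = fit
        inliers = np.abs(np.linalg.norm(points - centre, axis=1) - radius) < tol
        if inliers.sum() > best[2].sum():
            best = (centre, radius, inliers)
    return best

# Toy usage: noisy points on the visible upper arc of a coil cross-section.
rng = np.random.default_rng(2)
theta = rng.uniform(0.2 * np.pi, 0.8 * np.pi, 400)
pts = np.column_stack([1.5 + 0.6 * np.cos(theta), 0.6 * np.sin(theta)])
pts += rng.normal(0, 0.003, pts.shape)
centre, radius, inliers = ransac_circle(pts)
print(np.round(centre, 3), round(radius, 3), int(inliers.sum()))
```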

21 pages, 8446 KiB  
Article
Extraction of Corrosion Damage Features of Serviced Cable Based on Three-Dimensional Point Cloud Technology
by Tong Zhu, Shoushan Cheng, Haifang He, Kun Feng and Jinran Zhu
Materials 2025, 18(15), 3611; https://doi.org/10.3390/ma18153611 (registering DOI) - 31 Jul 2025
Abstract
The corrosion of high-strength steel wires is a key factor impacting the durability and reliability of cable-stayed bridges. In this study, the corrosion pit features on a high-strength steel wire, which had been in service for 27 years, were extracted and modeled using three-dimensional point cloud data obtained through 3D surface scanning. The Otsu method was applied for image binarization, and each corrosion pit was geometrically represented as an ellipse. Key pit parameters—including length, width, depth, aspect ratio, and a defect parameter—were statistically analyzed. Results of the Kolmogorov–Smirnov (K–S) test at a 95% confidence level indicated that the directional angle component (θ) did not conform to any known probability distribution. In contrast, the pit width (b) and defect parameter (Φ) followed a generalized extreme value distribution, the aspect ratio (b/a) matched a Beta distribution, and both the pit length (a) and depth (d) were best described by a Gaussian mixture model. The obtained results provide valuable reference for assessing the stress state, in-service performance, and predicted remaining service life of operational stay cables. Full article
(This article belongs to the Section Construction and Building Materials)
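The distribution-fitting and testing step described above (generalized extreme value, Beta, and Gaussian-family fits checked with Kolmogorov-Smirnov tests at the 95% level) maps directly onto scipy.stats. The sketch below shows the pattern on synthetic pit widths and aspect ratios; the values are illustrative and do not reuse the paper's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic pit widths (mm) standing in for the measured values.
pit_width = stats.genextreme.rvs(c=0.1, loc=0.8, scale=0.2, size=300, random_state=rng)

# Fit a generalized extreme value distribution and test it with K-S at the 95% level.
c, loc, scale = stats.genextreme.fit(pit_width)
ks_stat, p_value = stats.kstest(pit_width, 'genextreme', args=(c, loc, scale))
print(f"GEV fit: c={c:.3f}, loc={loc:.3f}, scale={scale:.3f}")
print(f"K-S statistic={ks_stat:.3f}, p={p_value:.3f}",
      "-> not rejected at 95%" if p_value > 0.05 else "-> rejected at 95%")

# The same pattern applies to the other pit parameters, e.g. a Beta fit
# for the aspect ratio b/a (values in (0, 1)).
aspect = stats.beta.rvs(2.0, 5.0, size=300, random_state=rng)
a_hat, b_hat, loc_hat, scale_hat = stats.beta.fit(aspect, floc=0, fscale=1)
print(stats.kstest(aspect, 'beta', args=(a_hat, b_hat, loc_hat, scale_hat)))
```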

28 pages, 5699 KiB  
Article
Multi-Modal Excavator Activity Recognition Using Two-Stream CNN-LSTM with RGB and Point Cloud Inputs
by Hyuk Soo Cho, Kamran Latif, Abubakar Sharafat and Jongwon Seo
Appl. Sci. 2025, 15(15), 8505; https://doi.org/10.3390/app15158505 (registering DOI) - 31 Jul 2025
Abstract
Recently, deep learning algorithms have been increasingly applied in construction for activity recognition, particularly for excavators, to automate processes and enhance safety and productivity through continuous monitoring of earthmoving activities. These deep learning algorithms analyze construction videos to classify excavator activities for earthmoving purposes. However, previous studies have solely focused on single-source external videos, which limits the activity recognition capabilities of the deep learning algorithm. This paper introduces a novel multi-modal deep learning-based methodology for recognizing excavator activities, utilizing multi-stream input data. It processes point clouds and RGB images using the two-stream long short-term memory convolutional neural network (CNN-LSTM) method to extract spatiotemporal features, enabling the recognition of excavator activities. A comprehensive dataset comprising 495,000 video frames of synchronized RGB and point cloud data was collected across multiple construction sites under varying conditions. The dataset encompasses five key excavator activities: Approach, Digging, Dumping, Idle, and Leveling. To assess the effectiveness of the proposed method, the performance of the two-stream CNN-LSTM architecture is compared with that of single-stream CNN-LSTM models on the same RGB and point cloud datasets, separately. The results demonstrate that the proposed multi-stream approach achieved an accuracy of 94.67%, outperforming existing state-of-the-art single-stream models, which achieved 90.67% accuracy for the RGB-based model and 92.00% for the point cloud-based model. These findings underscore the potential of the proposed activity recognition method, making it highly effective for automatic real-time monitoring of excavator activities, thereby laying the groundwork for future integration into digital twin systems for proactive maintenance and intelligent equipment management. Full article
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)
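The two-stream design can be pictured as two CNN-LSTM encoders, one per modality, whose temporal features are fused before classification over the five activity classes. The PyTorch sketch below is a schematic reconstruction under that reading, with made-up layer sizes and with the point cloud stream fed as a voxelized/projected pseudo-image; it is not the authors' architecture definition.

```python
import torch
import torch.nn as nn

class StreamCNNLSTM(nn.Module):
    """One stream: a small CNN applied per frame, then an LSTM over time."""
    def __init__(self, in_channels, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)

    def forward(self, x):                           # x: (B, T, C, H, W)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).flatten(1)    # per-frame features (B*T, 64)
        f = f.view(b, t, -1)
        _, (h, _) = self.lstm(f)
        return h[-1]                                # last hidden state (B, hidden)

class TwoStreamActivityNet(nn.Module):
    """Late fusion of an RGB stream and a point-cloud (pseudo-image) stream."""
    def __init__(self, n_classes=5):                # Approach, Digging, Dumping, Idle, Leveling
        super().__init__()
        self.rgb_stream = StreamCNNLSTM(in_channels=3)
        self.pc_stream = StreamCNNLSTM(in_channels=1)
        self.classifier = nn.Linear(128 * 2, n_classes)

    def forward(self, rgb_seq, pc_seq):
        fused = torch.cat([self.rgb_stream(rgb_seq), self.pc_stream(pc_seq)], dim=1)
        return self.classifier(fused)

# Toy forward pass: a batch of 2 clips, 8 frames each, 64x64 inputs.
model = TwoStreamActivityNet()
logits = model(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 8, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 5])
```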

28 pages, 2174 KiB  
Article
Validating Lava Tube Stability Through Finite Element Analysis of Real-Scene 3D Models
by Jiawang Wang, Zhizhong Kang, Chenming Ye, Haiting Yang and Xiaoman Qi
Electronics 2025, 14(15), 3062; https://doi.org/10.3390/electronics14153062 (registering DOI) - 31 Jul 2025
Abstract
The structural stability of lava tubes is a critical factor for their potential use in lunar base construction. Previous studies could not reflect the details of lava tube boundaries and perform accurate mechanical analysis. To this end, this study proposes a robust method to construct a high-precision, real-scene 3D model based on ground lava tube point cloud data. By employing finite element analysis, this study investigated the impact of real-world cross-sectional geometry, particularly the aspect ratio, on structural stability under surface pressure simulating meteorite impacts. A high-precision 3D reconstruction was achieved using UAV-mounted LiDAR and SLAM-based positioning systems, enabling accurate geometric capture of lava tube profiles. The original point cloud data were processed to extract cross-sections, which were then classified by their aspect ratios for analysis. Experimental results confirmed that the aspect ratio is a significant factor in determining stability. Crucially, unlike the monotonic trends often suggested by idealized models, analysis of real-world geometries revealed that the greatest deformation and structural vulnerability occur in sections with an aspect ratio between 0.5 and 0.6. For small lava tubes buried 3 m deep, the ground pressure they can withstand does not exceed 6 GPa. This process helps identify areas with weaker load-bearing capacity. The analysis demonstrated that a realistic 3D modeling approach provides a more accurate and reliable assessment of lava tube stability. This framework is vital for future evaluations of lunar lava tubes as safe habitats and highlights that complex, real-world geometry can lead to non-intuitive structural weaknesses not predicted by simplified models. Full article
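One preprocessing step implied above, classifying extracted cross-sections by aspect ratio, can be sketched very simply: compute a height-to-width ratio from each section's 2D points and bucket it into intervals such as 0.5 to 0.6. The binning and the axis-aligned extent convention below are assumptions, and the finite element analysis itself is not reproduced.

```python
import numpy as np

def aspect_ratio(section_xy):
    """Height/width ratio of a cross-section given (N, 2) points in the
    section plane (horizontal, vertical). Axis-aligned extents are used
    here; a PCA-aligned bounding box would be a natural refinement."""
    extents = section_xy.max(axis=0) - section_xy.min(axis=0)
    return extents[1] / extents[0]

def classify_sections(sections, bins=np.arange(0.3, 1.01, 0.1)):
    """Group cross-sections into aspect-ratio intervals (e.g. 0.5-0.6)."""
    ratios = np.array([aspect_ratio(s) for s in sections])
    return ratios, np.digitize(ratios, bins)

# Toy usage: three elliptical sections with different flattening.
theta = np.linspace(0, 2 * np.pi, 500)
sections = [np.column_stack([5.0 * np.cos(theta), h * np.sin(theta)])
            for h in (2.0, 2.8, 4.5)]
ratios, labels = classify_sections(sections)
print(np.round(ratios, 2), labels)
```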

15 pages, 2538 KiB  
Article
Dynamic Obstacle Perception Technology for UAVs Based on LiDAR
by Wei Xia, Feifei Song and Zimeng Peng
Drones 2025, 9(8), 540; https://doi.org/10.3390/drones9080540 (registering DOI) - 31 Jul 2025
Abstract
With the widespread application of small quadcopter drones in the military and civilian fields, the security challenges they face are gradually becoming apparent. Especially in dynamic environments, the rapidly changing conditions make the flight of drones more complex. To address the computational limitations of small quadcopter drones and meet the demands of obstacle perception in dynamic environments, a LiDAR-based obstacle perception algorithm is proposed. First, accumulation, filtering, and clustering processes are carried out on the LiDAR point cloud data to complete the segmentation and extraction of point cloud obstacles. Then, an obstacle motion/static discrimination algorithm based on three-dimensional point motion attributes is developed to classify dynamic and static point clouds. Finally, oriented bounding box (OBB) detection is employed to simplify the representation of the spatial position and shape of dynamic point cloud obstacles, and motion estimation is achieved by tracking the OBB parameters using a Kalman filter. Simulation experiments demonstrate that this method can ensure a dynamic obstacle detection frequency of 10 Hz and successfully detect multiple dynamic obstacles in the environment. Full article
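The motion-estimation step, tracking OBB parameters with a Kalman filter, can be sketched with a constant-velocity filter over the box centre. The NumPy example below tracks only the 3D centre (the method described above also tracks the full OBB parameters), and the process and measurement noise values are placeholders.

```python
import numpy as np

class ConstantVelocityKF:
    """Constant-velocity Kalman filter over an OBB centre.
    State: [x, y, z, vx, vy, vz]; measurement: detected OBB centre [x, y, z]."""
    def __init__(self, dt=0.1, q=1e-2, r=5e-2):
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)           # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = q * np.eye(6)                    # process noise (placeholder)
        self.R = r * np.eye(3)                    # measurement noise (placeholder)
        self.x = np.zeros(6)
        self.P = np.eye(6)

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the detected OBB centre
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3], self.x[3:]             # filtered centre, velocity estimate

# Toy usage: an obstacle moving at 1 m/s along x, observed at 10 Hz with noise.
kf = ConstantVelocityKF(dt=0.1)
rng = np.random.default_rng(5)
for k in range(20):
    true_centre = np.array([1.0 + 0.1 * k, 2.0, 0.5])
    centre, velocity = kf.step(true_centre + rng.normal(0, 0.02, 3))
print(np.round(centre, 2), np.round(velocity, 2))
```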

31 pages, 11269 KiB  
Review
Advancements in Semantic Segmentation of 3D Point Clouds for Scene Understanding Using Deep Learning
by Hafsa Benallal, Nadine Abdallah Saab, Hamid Tairi, Ayman Alfalou and Jamal Riffi
Technologies 2025, 13(8), 322; https://doi.org/10.3390/technologies13080322 - 30 Jul 2025
Abstract
Three-dimensional semantic segmentation is a fundamental problem in computer vision with a wide range of applications in autonomous driving, robotics, and urban scene understanding. The task involves assigning semantic labels to each point in a 3D point cloud, a data representation that is inherently unstructured, irregular, and spatially sparse. In recent years, deep learning has become the dominant framework for addressing this task, leading to a broad variety of models and techniques designed to tackle the unique challenges posed by 3D data. This survey presents a comprehensive overview of deep learning methods for 3D semantic segmentation. We organize the literature into a taxonomy that distinguishes between supervised and unsupervised approaches. Supervised methods are further classified into point-based, projection-based, voxel-based, and hybrid architectures, while unsupervised methods include self-supervised learning strategies, generative models, and implicit representation techniques. In addition to presenting and categorizing these approaches, we provide a comparative analysis of their performance on widely used benchmark datasets, discuss key challenges such as generalization, model transferability, and computational efficiency, and examine the limitations of current datasets. The survey concludes by identifying potential directions for future research in this rapidly evolving field. Full article
(This article belongs to the Section Information and Communication Technologies)

16 pages, 5301 KiB  
Article
TSINet: A Semantic and Instance Segmentation Network for 3D Tomato Plant Point Clouds
by Shanshan Ma, Xu Lu and Liang Zhang
Appl. Sci. 2025, 15(15), 8406; https://doi.org/10.3390/app15158406 - 29 Jul 2025
Abstract
Accurate organ-level segmentation is essential for achieving high-throughput, non-destructive, and automated plant phenotyping. To address the challenge of intelligent acquisition of phenotypic parameters in tomato plants, we propose TSINet, an end-to-end dual-task segmentation network designed for effective and precise semantic labeling and instance recognition of tomato point clouds, based on the Pheno4D dataset. TSINet adopts an encoder–decoder architecture, where a shared encoder incorporates four Geometry-Aware Adaptive Feature Extraction Blocks (GAFEBs) to effectively capture local structures and geometric relationships in raw point clouds. Two parallel decoder branches are employed to independently decode shared high-level features for the respective segmentation tasks. Additionally, a Dual Attention-Based Feature Enhancement Module (DAFEM) is introduced to further enrich feature representations. The experimental results demonstrate that TSINet achieves superior performance in both semantic and instance segmentation, particularly excelling in challenging categories such as stems and large-scale instances. Specifically, TSINet achieves 97.00% mean precision, 96.17% recall, 96.57% F1-score, and 93.43% IoU in semantic segmentation and 81.54% mPrec, 81.69% mRec, 81.60% mCov, and 86.40% mWCov in instance segmentation. Compared with state-of-the-art methods, TSINet achieves balanced improvements across all metrics, significantly reducing false positives and false negatives while enhancing spatial completeness and segmentation accuracy. Furthermore, we conducted ablation studies and generalization tests to systematically validate the effectiveness of each TSINet component and the overall robustness of the model. This study provides an effective technological approach for high-throughput automated phenotyping of tomato plants, contributing to the advancement of intelligent agricultural management. Full article

23 pages, 8942 KiB  
Article
Optical and SAR Image Registration in Equatorial Cloudy Regions Guided by Automatically Point-Prompted Cloud Masks
by Yifan Liao, Shuo Li, Mingyang Gao, Shizhong Li, Wei Qin, Qiang Xiong, Cong Lin, Qi Chen and Pengjie Tao
Remote Sens. 2025, 17(15), 2630; https://doi.org/10.3390/rs17152630 - 29 Jul 2025
Abstract
The equator's unique combination of high humidity and temperature renders optical satellite imagery highly susceptible to persistent cloud cover. In contrast, synthetic aperture radar (SAR) offers a robust alternative due to its ability to penetrate clouds with microwave imaging. This study addresses the challenges of cloud-induced data gaps and cross-sensor geometric biases by proposing an advanced optical and SAR image-matching framework specifically designed for cloud-prone equatorial regions. We use a prompt-driven visual segmentation model with automatic prompt point generation to produce cloud masks that guide cross-modal feature-matching and joint adjustment of optical and SAR data. This process results in a comprehensive digital orthophoto map (DOM) with high geometric consistency, retaining the fine spatial detail of optical data and the all-weather reliability of SAR. We validate our approach across four equatorial regions using five satellite platforms with varying spatial resolutions and revisit intervals. Even in areas with more than 50 percent cloud cover, our method maintains sub-pixel edge accuracy at manually measured check points and delivers comprehensive DOM products, establishing a reliable foundation for downstream environmental monitoring and ecosystem analysis. Full article

22 pages, 5896 KiB  
Article
Point Cloud Generation Method Based on Dual-Prism Scanning with Multi-Parameter Optimization
by Yuanfeng Zhao, Zhen Zheng and Hong Chen
Photonics 2025, 12(8), 764; https://doi.org/10.3390/photonics12080764 - 29 Jul 2025
Abstract
This study addresses two critical challenges in biprism-based laser scanning systems: the lack of a comprehensive mathematical framework linking prism parameters to scanning performance, and unresolved theoretical gaps regarding parameter effects on point cloud quality. We propose a multi-parameter optimization method for point cloud generation using dual-prism scanning. By establishing a beam pointing mathematical model, we systematically analyze how prism wedge angles, refractive indices, rotation speed ratios, and placement configurations influence scanning performance, revealing their coupled effects on deflection angles, azimuth control, and coverage. The non-paraxial ray tracing method combined with the Möller–Trumbore algorithm enables efficient point cloud simulation. Experimental results demonstrate that our optimized parameters significantly enhance point cloud density, uniformity, and target feature integrity while overcoming limitations of traditional database construction methods. This work provides both theoretical foundations and practical solutions for high-precision 3D reconstruction in high-speed rendezvous scenarios such as missile-borne laser fuzes, offering advantages in cost-effectiveness and operational reliability. Full article
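The abstract names the Möller-Trumbore ray-triangle intersection algorithm as the primitive combined with non-paraxial ray tracing for point cloud simulation. The NumPy sketch below is a textbook implementation of that algorithm (not the paper's code): it returns the hit distance t along the ray, or None for a miss.

```python
import numpy as np

def moller_trumbore(orig, direction, v0, v1, v2, eps=1e-9):
    """Ray-triangle intersection (Moller-Trumbore). Returns t such that
    the hit point is orig + t * direction, or None if there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = orig - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, qvec) * inv_det
    return t if t > eps else None      # accept hits in front of the origin only

# Toy usage: a ray from the origin along +z hitting a triangle in the z = 2 plane.
tri = (np.array([-1.0, -1.0, 2.0]), np.array([1.0, -1.0, 2.0]), np.array([0.0, 1.0, 2.0]))
print(moller_trumbore(np.zeros(3), np.array([0.0, 0.0, 1.0]), *tri))  # -> 2.0
```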