Search Results (27)

Search Parameters: Keywords = weighted RANSAC

23 pages, 35493 KB  
Article
A Novel Point-Cloud-Based Alignment Method for Shelling Tool Pose Estimation in Aluminum Electrolysis Workshop
by Zhenggui Jiang, Yi Long, Yonghong Long, Weihua Fang and Xin Li
Information 2025, 16(9), 788; https://doi.org/10.3390/info16090788 - 10 Sep 2025
Viewed by 235
Abstract
In aluminum electrolysis workshops, real-time pose perception of shelling heads is crucial to process accuracy and equipment safety. However, due to high temperatures, smoke, dust, and metal obstructions, traditional pose estimation methods struggle to achieve high accuracy and robustness. At the same time, the continuous movement of the shelling head and the similar geometric structures around it make point cloud matching difficult, which further complicates tracking of its position and orientation. In response to these challenges, we propose a multi-stage optimization pose estimation algorithm based on point cloud processing, designed for dynamic perception of three-dimensional components in complex industrial scenarios. The first stage improves the accuracy of initial matching by combining weighted 3D Hough voting and an adaptive threshold mechanism with an improved FPFH feature matching strategy. The second stage integrates FPFH and PCA feature information to achieve a stable initial registration within the RANSAC-IA coarse registration framework. The third stage applies an improved ICP algorithm that enhances the convergence of the registration process and the accuracy of the final pose estimation. The experimental results show that the proposed method has good robustness and adaptability in a real electrolysis workshop environment and can estimate the pose of the shelling head in the presence of noise, occlusion, and complex background interference. Full article
(This article belongs to the Special Issue Advances in Computer Graphics and Visual Computing)
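As a rough illustration of the coarse-registration stage described in this abstract (not the authors' implementation), the following NumPy sketch runs RANSAC over putative 3D–3D feature correspondences and scores each hypothesis by a weighted inlier count; the function names and parameters (estimate_rigid, weighted_ransac_registration, inlier_thresh) are illustrative assumptions.

```python
import numpy as np

def estimate_rigid(src, dst):
    """Kabsch rigid transform (R, t) mapping src -> dst (both k x 3)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def weighted_ransac_registration(src, dst, weights, n_iters=500, inlier_thresh=0.02, seed=0):
    """RANSAC over putative 3D-3D correspondences; inlier votes are weighted."""
    rng = np.random.default_rng(seed)
    best_score, best_Rt = -np.inf, None
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = estimate_rigid(src[idx], dst[idx])
        resid = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        score = weights[resid < inlier_thresh].sum()   # weighted consensus
        if score > best_score:
            best_score, best_Rt = score, (R, t)
    # re-estimate on the inliers of the best hypothesis before handing off to ICP
    R, t = best_Rt
    inl = np.linalg.norm((src @ R.T + t) - dst, axis=1) < inlier_thresh
    return estimate_rigid(src[inl], dst[inl]) if inl.sum() >= 3 else best_Rt
```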

14 pages, 1003 KB  
Article
A Linear Fitting Algorithm Based on Modified Random Sample Consensus
by Yujin Min, Yun Tang, Hao Chen and Faquan Zhang
Appl. Sci. 2025, 15(11), 6370; https://doi.org/10.3390/app15116370 - 5 Jun 2025
Viewed by 670
Abstract
When performing linear fitting on datasets containing outliers, common algorithms may face problems like inadequate fitting accuracy. We propose a linear fitting algorithm based on Locality-Sensitive Hashing (LSH) and Random Sample Consensus (RANSAC). Our algorithm combines the efficient similarity search capabilities of the LSH algorithm with the robust fitting mechanism of RANSAC. With properly designed hash functions, similar data points are mapped to the same hash bucket, thereby enabling the efficient identification and removal of outliers. RANSAC is then used to fit the model parameters of the processed dataset. The optimal parameters for the linear model are obtained after multiple iterative processes. This algorithm significantly reduces the influence of outliers on the dataset, resulting in improved fitting accuracy and enhanced robustness. Experimental results demonstrate that the proposed improved RANSAC linear fitting algorithm outperforms the Weighted Least Squares, traditional RANSAC, and Maximum Likelihood Estimation methods, achieving a reduction in the sum of squared residuals by 29%, 16%, and 8%, respectively. Full article
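A minimal sketch of the general idea (not the paper's exact design): an LSH-style bucketing pass votes away isolated points before a standard RANSAC line fit. The bucket width, number of projections, and vote threshold below are illustrative assumptions, as is the variable xy_points in the usage note.

```python
import numpy as np

def lsh_prefilter(points, n_hashes=8, bucket_width=0.5, min_votes=5, seed=0):
    """Keep 2-D points that repeatedly fall into well-populated buckets of random 1-D projections."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(points), dtype=int)
    for _ in range(n_hashes):
        direction = rng.normal(size=2)
        direction /= np.linalg.norm(direction)
        keys = np.floor(points @ direction / bucket_width).astype(int)
        _, inverse, counts = np.unique(keys, return_inverse=True, return_counts=True)
        votes += counts[inverse] >= min_votes          # vote if the point's bucket is dense
    return points[votes >= n_hashes // 2]

def ransac_line(points, n_iters=300, thresh=0.05, seed=0):
    """Fit a 2-D line a*x + b*y + c = 0 with RANSAC; returns ((a, b, c), inlier_points)."""
    rng = np.random.default_rng(seed)
    best_inliers, best = None, None
    for _ in range(n_iters):
        p, q = points[rng.choice(len(points), 2, replace=False)]
        n = np.array([q[1] - p[1], p[0] - q[0]])       # unit normal of the candidate line
        if np.linalg.norm(n) < 1e-12:
            continue
        n /= np.linalg.norm(n)
        c = -n @ p
        inliers = np.abs(points @ n + c) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n[0], n[1], c)
    return best, points[best_inliers]

# usage sketch: model, inliers = ransac_line(lsh_prefilter(xy_points))
```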

19 pages, 43835 KB  
Article
A Stereo Disparity Map Refinement Method Without Training Based on Monocular Segmentation and Surface Normal
by Haoxuan Sun and Taoyang Wang
Remote Sens. 2025, 17(9), 1587; https://doi.org/10.3390/rs17091587 - 30 Apr 2025
Viewed by 1012
Abstract
Stereo disparity estimation is an essential component in computer vision and photogrammetry with many applications. However, there is a lack of real-world large datasets and large-scale models in the domain. Inspired by recent advances in foundation models for image segmentation, we explore RANSAC disparity refinement based on zero-shot monocular surface normal prediction and SAM segmentation masks, which combines stereo matching models with advanced monocular large-scale vision models. The disparity refinement problem is formulated as follows: extracting geometric structures based on SAM masks and surface normal predictions, building disparity map hypotheses for the geometric structures, and selecting among the hypotheses using a weighted RANSAC method. The underlying assumption is that, once a geometric structure has been extracted, the correct disparity of the entire structure can be reconstructed from the structural prior even if only part of its disparities are correct. Our method can refine the results of traditional models such as SGM as well as deep learning models such as MC-CNN. The model obtains a 15.48% D1-error without training on the US3D dataset and obtains a 6.09% bad 2.0 error and a 3.65% bad 4.0 error on the Middlebury dataset. This research helps to promote the understanding of scene and geometric structure in stereo disparity estimation and the application of combining advanced large-scale monocular vision models with stereo matching methods. Full article
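A minimal sketch of the kind of weighted RANSAC hypothesis selection described above: fitting a planar disparity model d = a*u + b*v + c over the pixels of one segment, with per-pixel confidence weights deciding the consensus score. The variable names, thresholds, and the final weighted least-squares refinement are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def weighted_ransac_plane_disparity(uv, disp, conf, n_iters=200, thresh=1.0, seed=0):
    """Fit disparity(u, v) = a*u + b*v + c inside one segment.
    uv: (N, 2) pixel coordinates, disp: (N,) disparities, conf: (N,) confidence weights."""
    rng = np.random.default_rng(seed)
    A_full = np.column_stack([uv, np.ones(len(uv))])
    best_score, best_plane = -np.inf, None
    for _ in range(n_iters):
        idx = rng.choice(len(uv), 3, replace=False)
        A = A_full[idx]
        if abs(np.linalg.det(A)) < 1e-9:            # skip degenerate (collinear) samples
            continue
        plane = np.linalg.solve(A, disp[idx])       # [a, b, c]
        resid = np.abs(A_full @ plane - disp)
        score = conf[resid < thresh].sum()          # confidence-weighted consensus
        if score > best_score:
            best_score, best_plane = score, plane
    # refine with weighted least squares on the inliers of the best hypothesis
    inl = np.abs(A_full @ best_plane - disp) < thresh
    w = np.sqrt(conf[inl])
    plane, *_ = np.linalg.lstsq(A_full[inl] * w[:, None], disp[inl] * w, rcond=None)
    return plane    # A_full @ plane then refills the disparities of the whole structure
```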

18 pages, 4080 KB  
Article
A Feature Extraction Algorithm for Corner Cracks in Slabs Based on Multi-Scale Adaptive Gradient Descent
by Kai Zeng, Zibo Xia, Junlei Qian, Xueqiang Du, Pengcheng Xiao and Liguang Zhu
Metals 2025, 15(3), 324; https://doi.org/10.3390/met15030324 - 17 Mar 2025
Viewed by 503
Abstract
Cracks at the corners of casting billets have a small morphology and rough surfaces. Corner cracks are generally irregular, with a depth of about 0.2–5 mm and a width of about 0.5–3 mm. It is difficult to detect the depth of cracks and their three-dimensional morphological characteristics, and the severity of cracks is hard to evaluate with traditional inspection methods. To effectively extract the topographic features of corner cracks, a multi-scale surface crack feature extraction algorithm based on weighted adaptive gradient descent was proposed. Firstly, the point cloud data of the billet corners were collected by a three-dimensional visual inspection platform. The point cloud neighborhood density was calculated using the k-nearest neighbor method; then the weighted covariance matrix was used to calculate the rate of change of the normals. Secondly, an adaptive attenuation rate based on the normal change was fused with the density weight to calculate a Gaussian weight for each neighborhood. The Gaussian weights were used to obtain the gradient changes between points and acquire the multi-scale morphological features of the crack. Finally, the interference caused by surface and boundary effects was eliminated by DBSCAN density clustering, and the complete three-dimensional morphology characteristics of the crack were obtained. The experimental results reveal that the precision rate, recall rate, and F-value of the improved algorithm are 96.68%, 91.32%, and 93.92%, respectively, which are superior to the results from RANSAC and other mainstream algorithms. The three-dimensional morphological characteristics of corner cracks can be effectively extracted using the improved algorithm, which provides a basis for judging the severity of the defect. Full article
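A compact sketch of one ingredient mentioned above, density-weighted covariance normal estimation over k-nearest neighbours; the Gaussian bandwidth choice and the curvature heuristic in the trailing comment are illustrative assumptions, not the paper's formulas.

```python
import numpy as np
from scipy.spatial import cKDTree

def weighted_normals(points, k=20):
    """Per-point normals from a density-weighted covariance of the k nearest neighbours."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)           # first neighbour is the point itself
    dists, idx = dists[:, 1:], idx[:, 1:]
    normals = np.empty_like(points)
    for i in range(len(points)):
        nbrs = points[idx[i]]
        sigma = dists[i].mean() + 1e-12                 # adaptive bandwidth from local density
        w = np.exp(-(dists[i] / sigma) ** 2)            # Gaussian neighbour weights
        centroid = (w[:, None] * nbrs).sum(0) / w.sum()
        d = nbrs - centroid
        cov = (w[:, None] * d).T @ d / w.sum()          # weighted covariance matrix
        eigval, eigvec = np.linalg.eigh(cov)
        normals[i] = eigvec[:, 0]                       # eigenvector of the smallest eigenvalue
    return normals

# A crude normal-change score, e.g. 1 - |n_i . mean(neighbour normals)|, can then flag
# high-curvature crack candidates before DBSCAN removes surface/boundary interference.
```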

28 pages, 14863 KB  
Article
Band Weight-Optimized BiGRU Model for Large-Area Bathymetry Inversion Using Satellite Images
by Xiaotao Xi, Gongju Guo and Jianxiang Gu
J. Mar. Sci. Eng. 2025, 13(2), 246; https://doi.org/10.3390/jmse13020246 - 27 Jan 2025
Cited by 1 | Viewed by 903
Abstract
Currently, using satellite images combined with deep learning models has become an efficient approach for bathymetry inversion. However, most methods use only a limited number of bands and are rarely applied to large-area bathymetry inversion, which is important if the methods are to be used in operational environments. Aiming to utilize all band information of satellite optical image data, this paper first proposes the Band Weight-Optimized Bidirectional Gated Recurrent Unit (BWO_BiGRU) model for bathymetry inversion. To further improve the accuracy, the Stumpf model is incorporated into the BWO_BiGRU model to form another new model—the Band Weight-Optimized and Stumpf’s Bidirectional Gated Recurrent Unit (BWOS_BiGRU). In addition, using RANSAC to accurately extract in situ water depth points from the ICESat-2 dataset accelerates computation and improves convergence efficiency compared to DBSCAN. This study was conducted in the eastern bay of Shark Bay, Australia, covering an extensive shallow-water area of 1725 km2. A series of experiments were performed using the Stumpf, Band-Optimized Bidirectional LSTM (BoBiLSTM), BWO_BiGRU, and BWOS_BiGRU models to infer bathymetry from EnMAP, Sentinel-2, and Landsat 9 satellite images. The results show that when using EnMAP hyperspectral images, the bathymetry inversion of the BWO_BiGRU and BWOS_BiGRU models outperforms the Stumpf and BoBiLSTM models, with RMSEs of 0.64 m and 0.63 m, respectively. Additionally, the BWOS_BiGRU model is particularly effective in nearshore water areas (depths between 0 and 5 m) of multispectral images. In general, compared to multispectral satellite images, applying the proposed BWO_BiGRU model to hyperspectral satellite images achieves better bathymetry inversion results for large-area bathymetry maps. Full article
(This article belongs to the Special Issue New Advances in Marine Remote Sensing Applications)
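To give a feel for the band-weighting idea (not the published BWO_BiGRU architecture), here is a minimal PyTorch sketch in which a learned per-band weight rescales the spectral bands before a bidirectional GRU regresses water depth; layer sizes, the softmax weighting, and the band count in the usage note are assumptions.

```python
import torch
import torch.nn as nn

class BandWeightedBiGRU(nn.Module):
    """Treat the reflectance bands of one pixel as a sequence; learn a per-band weight,
    then regress water depth with a bidirectional GRU."""
    def __init__(self, n_bands, hidden=64):
        super().__init__()
        self.band_weights = nn.Parameter(torch.ones(n_bands))       # learned band importance
        self.gru = nn.GRU(input_size=1, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                                            # x: (batch, n_bands)
        x = x * torch.softmax(self.band_weights, dim=0)              # re-weight the bands
        out, _ = self.gru(x.unsqueeze(-1))                           # (batch, n_bands, 2*hidden)
        return self.head(out[:, -1]).squeeze(-1)                     # predicted depth (m)

# e.g. model = BandWeightedBiGRU(n_bands=224)   # hyperspectral-style input (illustrative)
```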

19 pages, 3375 KB  
Article
Enhancing Cross-Modal Camera Image and LiDAR Data Registration Using Feature-Based Matching
by Jennifer Leahy, Shabnam Jabari, Derek Lichti and Abbas Salehitangrizi
Remote Sens. 2025, 17(3), 357; https://doi.org/10.3390/rs17030357 - 22 Jan 2025
Cited by 2 | Viewed by 2494
Abstract
Registering light detection and ranging (LiDAR) data with optical camera images enhances spatial awareness in autonomous driving, robotics, and geographic information systems. The current challenges in this field involve aligning 2D-3D data acquired from sources with distinct coordinate systems, orientations, and resolutions. This paper introduces a new pipeline for camera–LiDAR post-registration to produce colorized point clouds. Utilizing deep learning-based matching between 2D spherical projection LiDAR feature layers and camera images, we can map 3D LiDAR coordinates to image grey values. Various LiDAR feature layers, including intensity, bearing angle, depth, and different weighted combinations, are used to find correspondences with camera images utilizing state-of-the-art deep learning matching algorithms, i.e., SuperGlue and LoFTR. Registration is achieved using collinearity equations and RANSAC to remove false matches. The pipeline’s accuracy is tested using survey-grade terrestrial datasets from the TX5 scanner, as well as datasets from a custom-made, low-cost mobile mapping system (MMS) named Simultaneous Localization And Mapping Multi-sensor roBOT (SLAMM-BOT), across diverse scenes; in both cases the pipeline outperformed its baseline solutions. SuperGlue performed best in high-feature scenes, whereas LoFTR performed best in low-feature or sparse data scenes. The LiDAR intensity layer had the strongest matches, but combining feature layers improved matching and reduced errors. Full article
(This article belongs to the Special Issue Remote Sensing Satellites Calibration and Validation)
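The paper registers with collinearity equations plus RANSAC; as a generic stand-in for that step, the sketch below uses OpenCV's PnP-RANSAC to reject false 3D–2D matches given a camera matrix. The function name, threshold, and the variable names in the usage note are illustrative, and this is not the authors' pipeline.

```python
import numpy as np
import cv2

def register_lidar_to_image(pts3d, pts2d, K, dist=None, reproj_err=3.0):
    """Estimate camera pose from matched LiDAR points (N, 3) and image points (N, 2),
    rejecting false matches with RANSAC. Returns (R, t, inlier_index_array)."""
    dist = np.zeros(5) if dist is None else dist
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64),
        K.astype(np.float64), dist, reprojectionError=reproj_err)
    if not ok:
        raise RuntimeError("PnP-RANSAC failed to find a pose")
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3), inliers.ravel()

# usage sketch: R, t, inlier_idx = register_lidar_to_image(lidar_xyz, pixel_uv, K)
```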

23 pages, 7868 KB  
Article
Target Fitting Method for Spherical Point Clouds Based on Projection Filtering and K-Means Clustered Voxelization
by Zhe Wang, Jiacheng Hu, Yushu Shi, Jinhui Cai and Lei Pi
Sensors 2024, 24(17), 5762; https://doi.org/10.3390/s24175762 - 4 Sep 2024
Cited by 2 | Viewed by 1854
Abstract
Industrial computed tomography (CT) is widely used in the measurement field owing to its advantages such as non-contact operation and high precision. To obtain accurate size parameters, fitting parameters can be obtained rapidly by processing volume data in the form of point clouds. However, due to factors such as artifacts in the CT reconstruction process, many abnormal interference points exist in the point clouds obtained after segmentation. The classic least squares algorithm is easily affected by these points, resulting in significant deviation of the solution of the linear equations from the normal value and poor robustness, while the random sample consensus (RANSAC) approach yields insufficient fitting accuracy within a limited timeframe and number of iterations. To address these shortcomings, we propose a spherical point cloud fitting algorithm based on projection filtering and K-Means clustering (PK-RANSAC), which strategically integrates and enhances these two methods to achieve excellent accuracy and robustness. The proposed method first uses RANSAC for rough parameter estimation, then corrects the deviation of the spherical center coordinates through two-dimensional projection, and finally obtains the spherical center point set by sampling and performing K-Means clustering. The largest cluster is weighted to obtain accurate fitting parameters. We conducted a comparative experiment using a three-dimensional ball-plate standard. The sphere center fitting deviation of PK-RANSAC was 1.91 μm, which is significantly better than RANSAC’s value of 25.41 μm. The experimental results demonstrate that PK-RANSAC has higher accuracy and stronger robustness for fitting geometric parameters. Full article
(This article belongs to the Section Sensing and Imaging)
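A condensed sketch of the general idea described above: collect sphere-centre candidates from many RANSAC samples, cluster them with K-Means, and average the dominant cluster with inlier-count weights. The 2-D projection correction step is omitted, and the sample counts, thresholds, and cluster number are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def sphere_from_4(p):
    """Algebraic sphere through 4 points: x^2 + y^2 + z^2 + D x + E y + F z + G = 0."""
    A = np.column_stack([p, np.ones(4)])
    b = -(p ** 2).sum(axis=1)
    D, E, F, G = np.linalg.solve(A, b)
    center = -0.5 * np.array([D, E, F])
    return center, np.sqrt(center @ center - G)

def pk_style_sphere_fit(points, n_samples=300, thresh=0.01, n_clusters=3, seed=0):
    rng = np.random.default_rng(seed)
    candidates, scores = [], []
    for _ in range(n_samples):
        try:
            c, r = sphere_from_4(points[rng.choice(len(points), 4, replace=False)])
        except np.linalg.LinAlgError:                    # coplanar / degenerate sample
            continue
        if not np.isfinite(r):
            continue
        inliers = np.abs(np.linalg.norm(points - c, axis=1) - r) < thresh
        candidates.append(np.append(c, r))
        scores.append(inliers.sum())
    candidates, scores = np.array(candidates), np.array(scores)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(candidates[:, :3])
    largest = np.bincount(km.labels_).argmax()           # dominant centre cluster
    mask = km.labels_ == largest
    w = scores[mask] / scores[mask].sum()                # inlier-count weighting
    fit = (candidates[mask] * w[:, None]).sum(axis=0)
    return fit[:3], fit[3]                               # weighted centre and radius
```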

17 pages, 6246 KB  
Article
YPL-SLAM: A Simultaneous Localization and Mapping Algorithm for Point–line Fusion in Dynamic Environments
by Xinwu Du, Chenglin Zhang, Kaihang Gao, Jin Liu, Xiufang Yu and Shusong Wang
Sensors 2024, 24(14), 4517; https://doi.org/10.3390/s24144517 - 12 Jul 2024
Cited by 2 | Viewed by 2014
Abstract
Simultaneous Localization and Mapping (SLAM) is one of the key technologies with which to address the autonomous navigation of mobile robots, utilizing environmental features to determine a robot’s position and create a map of its surroundings. Currently, visual SLAM algorithms typically yield precise and dependable outcomes in static environments, and many algorithms opt to filter out the feature points in dynamic regions. However, when there is an increase in the number of dynamic objects within the camera’s view, this approach might result in decreased accuracy or tracking failures. Therefore, this study proposes a solution called YPL-SLAM based on ORB-SLAM2. The solution adds a target recognition and region segmentation module to determine the dynamic region, potential dynamic region, and static region; determines the state of the potential dynamic region using the RANSAC method with epipolar geometric constraints; and removes the dynamic feature points. It then extracts the line features of the non-dynamic region and finally performs the point–line fusion optimization process using a weighted fusion strategy, considering the image dynamic score and the number of successful feature point–line matches, thus ensuring the system’s robustness and accuracy. A large number of experiments have been conducted using the publicly available TUM dataset to compare YPL-SLAM with globally leading SLAM algorithms. The results demonstrate that the new algorithm surpasses ORB-SLAM2 in terms of accuracy (with a maximum improvement of 96.1%) while also exhibiting a significantly enhanced operating speed compared to Dyna-SLAM. Full article
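A minimal sketch of the epipolar-constraint test used to decide whether features in a potential dynamic region are actually moving: estimate the fundamental matrix with RANSAC, then flag matches whose distance to their epipolar line is large. The distance threshold and function name are illustrative assumptions, not YPL-SLAM's implementation.

```python
import numpy as np
import cv2

def flag_dynamic_points(pts_prev, pts_curr, dist_thresh=1.5):
    """pts_prev, pts_curr: (N, 2) matched keypoints in consecutive frames.
    Returns a boolean mask of points violating the epipolar constraint (likely dynamic)."""
    pts_prev = np.asarray(pts_prev, dtype=np.float64)
    pts_curr = np.asarray(pts_curr, dtype=np.float64)
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC, 1.0, 0.99)
    # epipolar lines in the current image corresponding to previous-frame points
    lines = cv2.computeCorrespondEpilines(pts_prev.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
    d = np.abs(a * pts_curr[:, 0] + b * pts_curr[:, 1] + c) / np.sqrt(a ** 2 + b ** 2)
    return d > dist_thresh

# Static points satisfy x'^T F x = 0, so a large point-to-epiline distance suggests
# independent motion; such features are dropped before pose optimisation.
```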

15 pages, 4809 KB  
Article
LiDAR Point Cloud Super-Resolution Reconstruction Based on Point Cloud Weighted Fusion Algorithm of Improved RANSAC and Reciprocal Distance
by Xiaoping Yang, Ping Ni, Zhenhua Li and Guanghui Liu
Electronics 2024, 13(13), 2521; https://doi.org/10.3390/electronics13132521 - 27 Jun 2024
Cited by 3 | Viewed by 2306
Abstract
This paper proposes a point-by-point weighted fusion algorithm based on an improved random sample consensus (RANSAC) and inverse distance weighting to address the issue of low-resolution point cloud data obtained from light detection and ranging (LiDAR) sensors and single technologies. By fusing low-resolution point clouds with higher-resolution point clouds at the data level, the algorithm generates high-resolution point clouds, achieving the super-resolution reconstruction of lidar point clouds. This method effectively reduces noise in the higher-resolution point clouds while preserving the structure of the low-resolution point clouds, ensuring that the semantic information of the generated high-resolution point clouds remains consistent with that of the low-resolution point clouds. Specifically, the algorithm constructs a K-d tree using the low-resolution point cloud to perform a nearest neighbor search, establishing the correspondence between the low-resolution and higher-resolution point clouds. Next, the improved RANSAC algorithm is employed for point cloud alignment, and inverse distance weighting is used for point-by-point weighted fusion, ultimately yielding the high-resolution point cloud. The experimental results demonstrate that the proposed point cloud super-resolution reconstruction method outperforms other methods across various metrics. Notably, it reduces the Chamfer Distance (CD) metric by 0.49 and 0.29 and improves the Precision metric by 7.75% and 4.47%, respectively, compared to two other methods. Full article
(This article belongs to the Special Issue Digital Security and Privacy Protection: Trends and Applications)
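A generic sketch of the correspondence-plus-IDW step described above: a K-d tree finds nearest high-resolution neighbours for each low-resolution point, and inverse distance weighting blends them. The neighbour count, power, and the 50/50 structure-preserving average are illustrative assumptions, not the paper's fusion rule.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_fuse(low_res, high_res, k=4, power=2.0, eps=1e-9):
    """For each low-resolution point, blend its k nearest (already aligned) high-resolution
    neighbours by inverse distance weighting; returns a denser, denoised estimate."""
    tree = cKDTree(high_res)
    dists, idx = tree.query(low_res, k=k)
    w = 1.0 / (dists ** power + eps)                    # inverse distance weights
    w /= w.sum(axis=1, keepdims=True)
    fused = (w[..., None] * high_res[idx]).sum(axis=1)  # weighted neighbour average
    # keep the low-resolution structure by averaging it with the neighbour estimate
    return 0.5 * (fused + low_res)

# Typical order of operations: align the clouds first (e.g. RANSAC + ICP), then fuse.
```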

19 pages, 8041 KB  
Article
Tree Diameter at Breast Height Extraction Based on Mobile Laser Scanning Point Cloud
by Yuhao Sheng, Qingzhan Zhao, Xuewen Wang, Yihao Liu and Xiaojun Yin
Forests 2024, 15(4), 590; https://doi.org/10.3390/f15040590 - 25 Mar 2024
Cited by 5 | Viewed by 2272
Abstract
The traditional measurement method (e.g., field survey) of tree diameter circumference often has high labor costs and is time-consuming. Mobile laser scanning (MLS) is a powerful tool for measuring forest diameter at breast height (DBH). However, the accuracy of point cloud registration seriously affects the results of DBH measurements. To address this issue, this paper proposes a new method for extracting tree DBH parameters; it achieves the purpose of efficient and accurate extraction of tree DBH by point cloud filtering, single-tree instance segmentation, and least squares circle fitting. Firstly, the point cloud data of the plantation forest samples were obtained by a self-constructed unmanned vehicle-mounted mobile laser scanning system, and the ground point cloud was removed using cloth simulation filtering (CSF). Secondly, fast Euclidean clustering (FEC) was employed to segment the single-tree instances, and the point cloud slices at breast height were extracted based on the point sets of single-tree instances, which were then fitted in two dimensions using the horizontally projected point cloud slices. Finally, a circle fitting algorithm based on intensity weighted least squares (IWLS) was proposed to solve the optimal circle model based on 2D point cloud slices, to minimize the impact of misaligned point clouds on DBH measures. The results showed that the mean absolute error (MAE) of the IWLS method was 2.41 cm, the root mean square error (RMSE) was 2.81 cm, and the relative accuracy was 89.77%. Compared with the random sample consensus (RANSAC) algorithm and ordinary least squares (OLS), the MAE was reduced by 36.45% and 9.14%, the RMSE was reduced by 40.90% and 12.26%, and the relative accuracy was improved by 8.99% and 1.63%, respectively. The R2 value of the fitted curve of the IWLS method was the closest to 1, with the highest goodness of fit and a significant linear correlation with the true value. The proposed intensity weighted least squares circle-fitting DBH extraction method can effectively improve the DBH extraction accuracy of mobile laser scanning point cloud data and reduce the influence of poorly aligned point clouds on DBH fitting. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
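As a rough illustration of an intensity-weighted least squares circle fit on a 2-D breast-height slice, here is a weighted algebraic (Kasa-style) fit; using normalized return intensity directly as the weight is an assumption for the sketch, not necessarily the authors' weighting scheme.

```python
import numpy as np

def iwls_circle(xy, intensity):
    """Weighted algebraic circle fit: minimise sum w_i (x^2 + y^2 + D x + E y + F)^2.
    xy: (N, 2) slice points, intensity: (N,) LiDAR return intensities used as weights."""
    w = intensity / intensity.sum()
    A = np.column_stack([xy, np.ones(len(xy))])
    b = -(xy ** 2).sum(axis=1)
    sw = np.sqrt(w)[:, None]                           # row scaling implements the weights
    D, E, F = np.linalg.lstsq(A * sw, b * sw.ravel(), rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), 2.0 * r                           # circle centre and DBH (diameter)
```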

21 pages, 6143 KB  
Article
Smart Shelf System for Customer Behavior Tracking in Supermarkets
by John Anthony C. Jose, Christopher John B. Bertumen, Marianne Therese C. Roque, Allan Emmanuel B. Umali, Jillian Clara T. Villanueva, Richard Josiah TanAi, Edwin Sybingco, Jayne San Juan and Erwin Carlo Gonzales
Sensors 2024, 24(2), 367; https://doi.org/10.3390/s24020367 - 8 Jan 2024
Cited by 7 | Viewed by 7888
Abstract
Transactional data from point-of-sales systems may not consider customer behavior before purchasing decisions are finalized. A smart shelf system would be able to provide additional data for retail analytics. In previous works, the conventional approach has involved customers standing directly in front of products on a shelf. Data from instances where customers deviated from this convention, referred to as “cross-location”, were typically omitted. However, recognizing instances of cross-location is crucial when contextualizing multi-person and multi-product tracking for real-world scenarios. The monitoring of product association with customer keypoints through RANSAC modeling and particle filtering (PACK-RMPF) is a system that addresses cross-location, consisting of twelve load cell pairs for product tracking and a single camera for customer tracking. In this study, the time series vision data underwent further processing with R-CNN and StrongSORT. An NTP server enabled the synchronization of timestamps between the weight and vision subsystems. Multiple particle filtering predicted the trajectory of each customer’s centroid and wrist keypoints relative to the location of each product. RANSAC modeling was implemented on the particles to associate a customer with each event. Comparing system-generated customer–product interaction history with the shopping lists given to each participant, the system had a general average recall rate of 76.33% and 79% for cross-location instances over five runs. Full article
(This article belongs to the Special Issue Deep Learning for Information Fusion and Pattern Recognition)

15 pages, 4272 KB  
Article
Precise Cadastral Survey of Rural Buildings Based on Wall Segment Topology Analysis from Dense Point Clouds
by Bo Xu, Zhaochen Han and Min Chen
Appl. Sci. 2023, 13(18), 10197; https://doi.org/10.3390/app131810197 - 11 Sep 2023
Cited by 1 | Viewed by 1607
Abstract
The renewal and updating of the cadastre of real estate is a long and tedious task for land administration, especially for rural buildings that lack unified design and planning. In order to retain the required accuracy of all points in the register, extensive manual editing is often required. In this work, a precise cadastral survey approach is proposed using Unmanned Aerial Vehicle (UAV) imagery-based stereo point clouds. To ensure the accuracy and uniqueness of building outer walls, the non-maximum suppression of wall points that can separate noise and avoid repeated extraction is proposed. Meanwhile, the multiple cue weighted RANSAC, considering both point-to-line distance and normal consistency, is proposed to reduce the influence of building attachments and avoid spurious edges. For a better description of wall topology, the wall line segment topology graph (WLTG), which can guide the connection of adjacent lines and support the searching of closed boundaries through the minimum graph loop analysis, is also built. Experimental results show that the proposed method can effectively detect the building vector contours with high precision and correct topology, and the detection completeness and correctness of the edge corners can reach 84.9% and 93.2% when the mean square error is below 10 cm. Full article
(This article belongs to the Special Issue Advances in 3D Sensing Techniques and Its Applications)
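A minimal sketch of a multiple-cue weighted RANSAC line fit in the spirit of the abstract: each point contributes to the consensus through both a point-to-line distance cue and a normal-consistency cue. The cue definitions, weights, and thresholds below are illustrative assumptions rather than the authors' scoring function.

```python
import numpy as np

def multi_cue_ransac_line(pts, normals, n_iters=500, d_thresh=0.03, seed=0):
    """RANSAC line fit for one wall segment in plan view.
    pts: (N, 2) projected wall points, normals: (N, 2) unit point normals."""
    rng = np.random.default_rng(seed)
    best_score, best_line = -np.inf, None
    for _ in range(n_iters):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        n = np.array([q[1] - p[1], p[0] - q[0]])                 # candidate line normal
        if np.linalg.norm(n) < 1e-12:
            continue
        n /= np.linalg.norm(n)
        c = -n @ p
        dist = np.abs(pts @ n + c)
        dist_cue = np.clip(1.0 - dist / d_thresh, 0.0, 1.0)      # 1 on the line, 0 beyond thresh
        normal_cue = np.abs(normals @ n)                         # 1 when point normal agrees
        score = (dist_cue * normal_cue).sum()                    # combined weighted consensus
        if score > best_score:
            best_score, best_line = score, (n[0], n[1], c)
    return best_line    # line a*x + b*y + c = 0 for the wall segment
```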

24 pages, 14256 KB  
Article
A Novel Framework for Stratified-Coupled BLS Tree Trunk Detection and DBH Estimation in Forests (BSTDF) Using Deep Learning and Optimization Adaptive Algorithm
by Huacong Zhang, Huaiqing Zhang, Keqin Xu, Yueqiao Li, Linlong Wang, Ren Liu, Hanqing Qiu and Longhua Yu
Remote Sens. 2023, 15(14), 3480; https://doi.org/10.3390/rs15143480 - 10 Jul 2023
Cited by 6 | Viewed by 2384
Abstract
Diameter at breast height (DBH) is a critical metric for quantifying forest resources, and obtaining accurate, efficient measurements of DBH is crucial for effective forest management and inventory. A backpack LiDAR system (BLS) can provide high-resolution representations of forest trunk structures, making it a promising tool for DBH measurement. However, in practical applications, deep learning-based tree trunk detection and DBH estimation using BLS still face numerous challenges, such as complex forest BLS data, low proportions of target point clouds leading to imbalanced class segmentation accuracy in deep learning models, and the low fitting accuracy and robustness of trunk point cloud DBH methods. To address these issues, this study proposed a novel framework for BLS stratified-coupled tree trunk detection and DBH estimation in forests (BSTDF). This framework employed a stratified coupling approach to create a tree trunk detection deep learning dataset, introduced a weighted cross-entropy focal-loss function module (WCF) and a cosine annealing cyclic learning strategy (CACL) to enhance the WCF-CACL-RandLA-Net model for extracting trunk point clouds, and applied a least squares adaptive random sample consensus (LSA-RANSAC) cylindrical fitting method for DBH estimation. The findings reveal that the dataset based on the stratified-coupled approach effectively reduces the amount of data needed for deep learning tree trunk detection. To benchmark the accuracy of BSTDF, we conducted synchronized control experiments against a variety of mainstream tree trunk detection models and DBH fitting methods, including the RandLA-Net model and the RANSAC algorithm. When juxtaposed with the RandLA-Net model, the WCF-CACL-RandLA-Net model employed by BSTDF demonstrated a 6% increase in trunk segmentation accuracy and a 3% improvement in the F1 score with the same training sample volume, effectively mitigating the class imbalance issues encountered during segmentation. Simultaneously, when compared to RANSAC, the LSA-RANSAC method adopted by BSTDF reduced the RMSE by 1.08 cm and boosted R2 by 14%, effectively tackling the inadequacies of RANSAC’s fitting. The optimal acquisition distance for BLS data is 20 m, at which BSTDF’s overall tree trunk detection rate (ER) reaches 90.03%, with DBH estimation precision indicating an RMSE of 4.41 cm and an R2 of 0.87. This study demonstrated the effectiveness of BSTDF in forest DBH estimation, offering a more efficient solution for forest resource monitoring and quantification, and possessing immense potential to replace field forest measurements. Full article
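The WCF module above couples class weighting with a focal term to counter the trunk/non-trunk imbalance. A generic weighted focal loss in PyTorch looks roughly like the sketch below; the gamma value and example weight vector are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, class_weights, gamma=2.0):
    """logits: (N, C) per-point class scores, targets: (N,) integer labels,
    class_weights: (C,) tensor, e.g. a higher weight for the rare 'trunk' class."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log prob of the true class
    pt = log_pt.exp()
    alpha = class_weights[targets]                              # per-point class weight
    loss = -alpha * (1.0 - pt) ** gamma * log_pt                # focal modulation of weighted CE
    return loss.mean()

# e.g. weights = torch.tensor([0.2, 0.8])   # background vs. trunk (illustrative)
# loss = weighted_focal_loss(point_logits, point_labels, weights)
```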

26 pages, 25984 KB  
Article
A Framework for Using UAVs to Detect Pavement Damage Based on Optimal Path Planning and Image Splicing
by Runmin Zhao, Yi Huang, Haoyuan Luo, Xiaoming Huang and Yangzezhi Zheng
Sustainability 2023, 15(3), 2182; https://doi.org/10.3390/su15032182 - 24 Jan 2023
Cited by 8 | Viewed by 2783
Abstract
In order to investigate the use of unmanned aerial vehicles (UAVs) for future application in road damage detection and to provide a theoretical and technical basis for UAV road damage detection, this paper determined the recommended flight and camera parameters based on the needs of continuous road image capture and pavement disease recognition. Furthermore, to realize automatic route planning and control, continuous photography control, and image stitching and smoothing tasks, a UAV control framework for road damage detection, based on the Dijkstra algorithm, the speeded-up robust features (SURF) algorithm, the random sample consensus (RANSAC) algorithm, and the gradual in and out weight fusion method, was also proposed in this paper. Using the Canny operator, it was verified that the stitched long road image obtained by the proposed UAV control method is suitable for machine-learning-based pavement disease identification. Full article
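After feature matching and RANSAC homography estimation, the "gradual in and out" weight fusion smooths the seam between adjacent frames. A minimal NumPy sketch of that blending step, assuming the two strips have already been warped into a common frame and overlap by a known number of columns (an illustrative simplification of the real geometry):

```python
import numpy as np

def gradual_blend(left, right, overlap):
    """Blend two horizontally adjacent road-image strips whose last/first `overlap`
    columns coincide, using a linear gradual-in/gradual-out weight ramp.
    left, right: H x W x 3 uint8 arrays already warped into the same frame."""
    h, wl, _ = left.shape
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]       # weight of the left strip
    seam = (alpha * left[:, wl - overlap:].astype(np.float32) +
            (1.0 - alpha) * right[:, :overlap].astype(np.float32))
    return np.concatenate(
        [left[:, :wl - overlap], seam.astype(np.uint8), right[:, overlap:]], axis=1)
```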

14 pages, 5546 KB  
Article
An Efficient Algorithm for Infrared Earth Sensor with a Large Field of View
by Bendong Wang, Hao Wang and Zhonghe Jin
Sensors 2022, 22(23), 9409; https://doi.org/10.3390/s22239409 - 2 Dec 2022
Cited by 4 | Viewed by 2765
Abstract
Infrared Earth sensors with large-field-of-view (FOV) cameras are widely used in low-Earth-orbit satellites. To improve the accuracy and speed of Earth sensors, an algorithm based on modified random sample consensus (RANSAC) and weighted total least squares (WTLS) is proposed. Firstly, the modified RANSAC with a pre-verification step was used to remove the noisy points efficiently. Then, the Earth’s oblateness was taken into consideration and the Earth’s horizon was projected onto a unit sphere as a three-dimensional (3D) curve. Finally, TLS and WTLS were used to fit the projection of the Earth horizon. With the help of TLS and WTLS, the accuracy of the Earth sensor was greatly improved. Simulated images and on-orbit infrared images obtained via the satellite Tianping-2B were used to assess the performance of the algorithm. The experimental results demonstrate that the method outperforms RANSAC, M-estimator sample consensus (MLESAC), and Hough transformation in terms of speed. The accuracy of the algorithm for nadir estimation is approximately 0.04° (root-mean-square error) when Earth is fully visible and 0.16° when the off-nadir angle is 120°, which is a significant improvement upon other nadir estimation algorithms. Full article
(This article belongs to the Topic Micro/Nano Satellite Technology, Systems and Components)
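Total least squares minimises orthogonal rather than vertical residuals, which for a linear model reduces to an SVD problem. As a compact illustration of the (weighted) TLS idea, the sketch below fits a plane n·x = d to 3-D points with simple row-scaling weights; this is not the paper's full WTLS horizon-curve formulation.

```python
import numpy as np

def weighted_tls_plane(points, weights=None):
    """Fit a plane n . x = d to (N, 3) points by (weighted) total least squares,
    minimising the weighted sum of squared orthogonal distances."""
    w = np.ones(len(points)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    centroid = (w[:, None] * points).sum(axis=0)
    centered = (points - centroid) * np.sqrt(w)[:, None]    # row scaling applies the weights
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    n = Vt[-1]                                              # direction of smallest singular value
    return n, n @ centroid                                  # unit normal and offset d

# Orthogonal (TLS) residuals for diagnostics: r = points @ n - d
```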
