Search Results (106)

Search Parameters:
Keywords = voxel filter

23 pages, 6708 KB  
Article
Feasibility Domain Construction and Characterization Method for Intelligent Underground Mining Equipment Integrating ORB-SLAM3 and Depth Vision
by Siya Sun, Xiaotong Han, Hongwei Ma, Haining Yuan, Sirui Mao, Chuanwei Wang, Kexiang Ma, Yifeng Guo and Hao Su
Sensors 2026, 26(3), 966; https://doi.org/10.3390/s26030966 - 2 Feb 2026
Viewed by 85
Abstract
To address the limited environmental perception capability and the difficulty of achieving consistent and efficient representation of the workspace feasible domain caused by high dust concentration, uneven illumination, and enclosed spaces in underground coal mines, this paper proposes a digital spatial construction and representation method for underground environments by integrating RGB-D depth vision with ORB-SLAM3. First, a ChArUco calibration board with embedded ArUco markers is adopted to perform high-precision calibration of the RGB-D camera, improving the reliability of geometric parameters under weak-texture and non-uniform lighting conditions. On this basis, a “dense–sparse cooperative” OAK-DenseMapper Pro module is further developed; the module improves point-cloud generation using a mathematical projection model, and combines enhanced stereo matching with multi-stage depth filtering to achieve high-quality dense point-cloud reconstruction from RGB-D observations. The dense point cloud is then converted into a probabilistic octree occupancy map, where voxel-wise incremental updates are performed for observed space while unknown regions are retained, enabling a memory-efficient and scalable 3D feasible-space representation. Experiments are conducted in multiple representative coal-mine tunnel scenarios; compared with the original ORB-SLAM3, the number of points in dense mapping increases by approximately 38% on average; in trajectory evaluation on the TUM dataset, the root mean square error, mean error, and median error of the absolute pose error are reduced by 7.7%, 7.1%, and 10%, respectively; after converting the dense point cloud to an octree, the map memory footprint is only about 0.5% of the original point cloud, with a single conversion time of approximately 0.75 s. The experimental results demonstrate that, while ensuring accuracy, the proposed method achieves real-time, efficient, and consistent representation of the 3D feasible domain in complex underground environments, providing a reliable digital spatial foundation for path planning, safe obstacle avoidance, and autonomous operation. Full article
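The abstract above describes converting a dense point cloud into a probabilistic octree occupancy map with voxel-wise incremental updates. As an illustration only (not the authors' implementation), the sketch below shows a log-odds occupancy update over a hashed voxel grid; the voxel size, hit/miss increments, and clamping limits are assumed values.

```python
import numpy as np

# Hedged sketch: voxel-wise incremental occupancy updates in log-odds form,
# using a hashed voxel grid as a stand-in for a true octree (e.g., OctoMap).
VOXEL = 0.05                 # assumed voxel edge length [m]
L_HIT, L_MISS = 0.85, -0.4   # assumed log-odds increments for occupied / free observations
L_MIN, L_MAX = -2.0, 3.5     # clamping limits so the map stays adaptive

log_odds = {}                # voxel index (ix, iy, iz) -> log-odds value

def voxel_index(points):
    """Map 3D points (N, 3) to integer voxel indices."""
    return np.floor(points / VOXEL).astype(int)

def update(points_hit, points_free):
    """Incrementally update observed voxels; unobserved voxels stay absent (retained as unknown)."""
    for idx in map(tuple, voxel_index(points_hit)):
        log_odds[idx] = np.clip(log_odds.get(idx, 0.0) + L_HIT, L_MIN, L_MAX)
    for idx in map(tuple, voxel_index(points_free)):
        log_odds[idx] = np.clip(log_odds.get(idx, 0.0) + L_MISS, L_MIN, L_MAX)

def occupancy_probability(idx):
    """Convert a voxel's log-odds back to an occupancy probability, or None if unknown."""
    l = log_odds.get(idx)
    return None if l is None else 1.0 - 1.0 / (1.0 + np.exp(l))
```

The memory savings reported in the abstract come from the octree's hierarchical structure and pruning, which this flat hash does not reproduce; the update rule itself is the same.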

18 pages, 7305 KB  
Article
SERail-SLAM: Semantic-Enhanced Railway LiDAR SLAM
by Weiwei Song, Shiqi Zheng, Xinye Dai, Xiao Wang, Yusheng Wang, Zihao Wang, Shujie Zhou, Wenlei Liu and Yidong Lou
Machines 2026, 14(1), 72; https://doi.org/10.3390/machines14010072 - 7 Jan 2026
Viewed by 367
Abstract
Reliable state estimation in railway environments presents significant challenges due to geometric degeneracy resulting from repetitive structural layouts and point cloud sparsity caused by high-speed motion. Conventional LiDAR-based SLAM systems frequently suffer from longitudinal drift and mapping artifacts when operating in such feature-scarce and dynamically complex scenarios. To address these limitations, this paper proposes SERail-SLAM, a robust semantic-enhanced multi-sensor fusion framework that tightly couples LiDAR odometry, inertial pre-integration, and GNSS constraints. Unlike traditional approaches that rely on rigid voxel grids or binary semantic masking, we introduce a Semantic-Enhanced Adaptive Voxel Map. By leveraging eigen-decomposition of local point distributions, this mapping strategy dynamically preserves fine-grained stable structures while compressing redundant planar surfaces, thereby enhancing spatial descriptiveness. Furthermore, to mitigate the impact of environmental noise and segmentation uncertainty, a confidence-aware filtering mechanism is developed. This method utilizes raw segmentation probabilities to adaptively weight input measurements, effectively distinguishing reliable landmarks from clutter. Finally, a category-weighted joint optimization scheme is implemented, where feature associations are constrained by semantic stability priors, ensuring globally consistent localization. Extensive experiments in real-world railway datasets demonstrate that the proposed system achieves superior accuracy and robustness compared to state-of-the-art geometric and semantic SLAM methods. Full article
(This article belongs to the Special Issue Dynamic Analysis and Condition Monitoring of High-Speed Trains)
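The Semantic-Enhanced Adaptive Voxel Map described above relies on eigen-decomposition of local point distributions to decide which voxels hold fine-grained stable structure and which hold compressible planar surfaces. A minimal sketch of that idea follows, with assumed voxel size and planarity threshold; it is not the paper's map structure and ignores the semantic weighting.

```python
import numpy as np

def classify_voxels(points, voxel_size=0.5, planarity_thresh=0.05, min_points=10):
    """Group points into voxels, eigen-decompose each local covariance, and label
    voxels as 'planar' (compressible) or 'structural' (to be preserved).
    Thresholds and voxel size are illustrative assumptions."""
    keys = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for key, p in zip(map(tuple, keys), points):
        voxels.setdefault(key, []).append(p)

    labels = {}
    for key, pts in voxels.items():
        pts = np.asarray(pts)
        if len(pts) < min_points:
            labels[key] = "sparse"
            continue
        cov = np.cov(pts.T)                          # 3x3 covariance of the local distribution
        eigvals = np.sort(np.linalg.eigvalsh(cov))   # ascending: lambda_0 <= lambda_1 <= lambda_2
        # A small lambda_0 relative to lambda_2 indicates a locally planar distribution.
        planarity = eigvals[0] / (eigvals[2] + 1e-12)
        labels[key] = "planar" if planarity < planarity_thresh else "structural"
    return labels
```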

30 pages, 6797 KB  
Article
Voxel-Based Leaf Area Estimation in Trellis-Grown Grapevines: A Destructive Validation and Comparison with Optical LAI Methods
by Poching Teng, Hiroyoshi Sugiura, Tomoki Date, Unseok Lee, Takeshi Yoshida, Tomohiko Ota and Junichi Nakagawa
Remote Sens. 2026, 18(2), 198; https://doi.org/10.3390/rs18020198 - 7 Jan 2026
Viewed by 322
Abstract
This study develops a voxel-based leaf area estimation framework and validates it using a three-year multi-temporal dataset (2022–2024) of pergola-trained grapevines. The workflow integrates 2D image analysis, ExGR-based leaf segmentation, and 3D reconstruction using Structure-from-Motion (SfM). Multi-angle canopy images were collected repeatedly during the growing seasons, and destructive leaf sampling was conducted to quantify true leaf area across multiple vines and years. After removing non-leaf structures with ExGR filtering, the point clouds were voxelized at a 1 cm³ resolution to derive structural occupancy metrics. Voxel-based leaf area showed strong within-vine correlations with destructively measured values (R² = 0.77–0.95), while cross-vine variability was influenced by canopy complexity, illumination, and point-cloud density. In contrast, optical LAI tools (DHP and LAI-2000) exhibited negligible correspondence with true leaf area due to multilayer occlusion and lateral light contamination typical of pergola systems. This expanded, multi-year analysis demonstrates that voxel occupancy provides a robust and scalable indicator of canopy structural density and leaf area, offering a practical foundation for remote-sensing-based phenotyping, yield estimation, and data-driven management in perennial fruit crops. Full article
(This article belongs to the Section Forest Remote Sensing)
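The core measurement in the study above is voxel occupancy: counting occupied 1 cm³ cells in a leaf-only point cloud and relating the count to destructively measured leaf area. A minimal sketch of that computation is below; the point-cloud variable, the example counts, and the leaf-area values are hypothetical placeholders, not data from the paper.

```python
import numpy as np

def voxel_occupancy(leaf_points, voxel_size=0.01):
    """Count occupied voxels at 1 cm resolution.
    leaf_points: (N, 3) array in metres, already filtered (e.g., by ExGR) to leaf-only points."""
    idx = np.floor(leaf_points / voxel_size).astype(int)
    occupied = np.unique(idx, axis=0)
    return len(occupied)

# Illustrative use: relate voxel counts to destructively measured leaf area via a
# per-vine linear fit. All numbers here are placeholders, not the study's values.
counts = np.array([12000.0, 18000.0, 25000.0])
true_leaf_area = np.array([3.1, 4.6, 6.4])            # m^2, hypothetical
slope, intercept = np.polyfit(counts, true_leaf_area, 1)
```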

10 pages, 496 KB  
Article
Adaptive 3D Augmentation in StyleGAN2-ADA for High-Fidelity Lung Nodule Synthesis from Limited CT Volumes
by Oleksandr Fedoruk, Konrad Klimaszewski and Michał Kruk
Sensors 2025, 25(24), 7404; https://doi.org/10.3390/s25247404 - 5 Dec 2025
Viewed by 729
Abstract
Generative adversarial networks (GANs) typically require large datasets for effective training, which poses challenges for volumetric medical imaging tasks where data are scarce. This study addresses this limitation by extending adaptive discriminator augmentation (ADA) for three-dimensional (3D) StyleGAN2 to improve generative performance on limited volumetric data. The proposed 3D StyleGAN2-ADA redefines all 2D operations for volumetric processing and incorporates the full set of original augmentation techniques. Experiments are conducted on the NoduleMNIST3D dataset of lung CT scans containing 590 voxel-based samples across two classes. Two augmentation pipelines are evaluated—one using color-based transformations and another employing a comprehensive set of 3D augmentations including geometric, filtering, and corruption augmentations. Performance is compared against the same network and dataset without any augmentations at all by assessing generation quality with Kernel Inception Distance (KID) and 3D Structural Similarity Index Measure (SSIM). Results show that volumetric ADA substantially improves training stability and reduces the risk of a mode collapse, even under severe data constraints. A strong augmentation strategy improves the realism of generated 3D samples and better preserves anatomical structures relative to those without data augmentation. These findings demonstrate that adaptive 3D augmentations effectively enable high-quality synthetic medical image generation from extremely limited volumetric datasets. The source code and the weights of the networks are available in the GitHub repository. Full article
(This article belongs to the Section Biomedical Sensors)

21 pages, 17034 KB  
Article
From CT Imaging to 3D Representations: Digital Modelling of Fibre-Reinforced Adhesives with Image-Based FEM
by Abdul Wasay Khan, Kaixin Xu, Nikolas Manousides and Claudio Balzani
Adhesives 2025, 1(4), 14; https://doi.org/10.3390/adhesives1040014 - 3 Dec 2025
Viewed by 483
Abstract
Short fibre-reinforced adhesives (SFRAs) are increasingly used in wind turbine blades to enhance stiffness and fatigue resistance, yet their heterogeneous microstructure poses significant challenges for predictive modelling. This study presents a fully automated digital workflow that integrates micro-computed tomography (µCT), image processing, and finite element modelling (FEM) to investigate the mechanical response of SFRAs. Our aim is also to establish a computational foundation for data-driven modelling and future AI surrogates of adhesive joints in wind turbine blades. High-resolution µCT scans were denoised and segmented using a hybrid non-local means and Gaussian filtering pipeline combined with Otsu thresholding and convex hull separation, enabling robust fibre identification and orientation analysis. Two complementary modelling strategies were employed: (i) 2D slice-based FEM models to rapidly assess microstructural effects on stress localisation and (ii) 3D voxel-based FEM models to capture the full anisotropic fibre network. Linear elastic simulations were conducted under inhomogeneous uniaxial extension and torsional loading, revealing interfacial stress hotspots at fibre tips and narrow ligaments. Fibre clustering and alignment strongly influenced stress partitioning between fibres and the matrix, while isotropic regions exhibited diffuse, matrix-dominated load transfer. The results demonstrate that image-based FEM provides a powerful route for structure–property modelling of SFRAs and establish a scalable foundation for digital twin development, reliability assessment, and integration with physics-informed surrogate modelling frameworks. Full article
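The segmentation pipeline above combines non-local means and Gaussian filtering with Otsu thresholding. A small scikit-image sketch of that hybrid step on a single µCT slice follows; the parameter values are illustrative assumptions, and the paper's convex hull fibre separation is not shown.

```python
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.restoration import denoise_nl_means, estimate_sigma

def segment_fibres(ct_slice):
    """Hybrid denoising and thresholding on one µCT slice (2D float array scaled to [0, 1]).
    Parameter values are illustrative, not the paper's settings."""
    sigma_est = np.mean(estimate_sigma(ct_slice))
    denoised = denoise_nl_means(ct_slice, h=1.15 * sigma_est, sigma=sigma_est,
                                patch_size=5, patch_distance=6, fast_mode=True)
    smoothed = gaussian(denoised, sigma=1.0)      # mild Gaussian smoothing after NL-means
    threshold = threshold_otsu(smoothed)          # global Otsu threshold
    return smoothed > threshold                   # binary fibre mask
```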

17 pages, 3294 KB  
Article
Detecting 3D Anomalies in Soil Water from Saline-Alkali Land of Yellow River Delta Using Sampling Data
by Zhoushun Han, Xin Fu, Haoran Zhang, Yang Li, Lehang Tang, Hengcai Zhang and Zhenghe Xu
Hydrology 2025, 12(12), 318; https://doi.org/10.3390/hydrology12120318 - 1 Dec 2025
Viewed by 431
Abstract
Understanding soil water in saline-alkali lands is crucial for sustainable agriculture and ecological restoration. Existing studies have largely focused on macroscopic distribution and associated interpolation techniques, which complicates the precise identification of localized anomalous regions. To address this limitation, this study proposes a novel three-dimensional detection method for localized soil water anomalies (3D-SWLA). Utilizing soil water sampling data, a comprehensive three-dimensional soil water cube is constructed through 3D Empirical Bayesian Kriging (3D EBK). We introduce the Soil Water Local Anomaly Index (SWLAI) and apply a second-order difference method to effectively identify and filter anomalous voxels. Then, the 3D Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is utilized to cluster Soil Water Anomalous Voxels (SWAVs), thereby delineating three-dimensional Local Anomalous Soil Water Areas (LASWAs) with precision and robustness. A series of experiments were conducted in Kenli to validate the proposed methodology. The results reveal that 3D-SWLA successfully identified a total of eight LASWAs, four of which, classified as large-scale anomalies (area > 1.0 km²), are predominantly concentrated in the northeastern coastal zone and the southern salt fields. The largest among them, LASWA-1, spans 1.8 km² with a vertical depth ranging from 0 to 35 cm and an average soil water content of 0.36. Another significant anomaly, LASWA-8, covers 1.5 km², extends to a depth of 0–60 cm, and exhibits a higher average water content of 0.42, reflecting distinct hydrological dynamics in these regions. Additionally, four smaller LASWAs (area < 1.0 km²) are spatially distributed along the northeastern irrigation channels, indicating localized moisture accumulation likely influenced by agricultural water management. Full article
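The final step described above clusters anomalous voxels with 3D DBSCAN to delineate contiguous anomalous areas. A minimal scikit-learn sketch of that step is below; the eps and min_samples values are assumptions, and the preceding SWLAI flagging and second-order difference filtering are not shown.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_anomalous_voxels(voxel_centers, eps=30.0, min_samples=8):
    """Cluster already-flagged anomalous voxel centres into contiguous anomalous regions.
    voxel_centers: (N, 3) ndarray in a consistent unit; eps and min_samples are
    illustrative, as the abstract does not report the study's values."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(voxel_centers)
    regions = {}
    for label in set(labels):
        if label == -1:                       # DBSCAN noise label
            continue
        regions[label] = voxel_centers[labels == label]
    return regions
```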

20 pages, 3879 KB  
Article
Optical Camera-Based Integrated Sensing and Communication for V2X Applications: Model and Optimization
by Ke Dong, Wenying Cao and Mingjun Wang
Sensors 2025, 25(22), 7061; https://doi.org/10.3390/s25227061 - 19 Nov 2025
Viewed by 561
Abstract
An optical camera-based integrated sensing and communication (OC-ISAC) system model is proposed to address the intrinsic requirements of vehicular-to-everything (V2X) applications in complex outdoor environments. The model enables the coexistence and potential mutual enhancement of environmental sensing and data transmission within the visible light spectrum. It characterizes the OC-ISAC channel by modeling how light, either actively emitted for communication or passively reflected from the environment, originating from any voxel in three-dimensional space, propagates to the image sensor and contributes to the observed pixel values. This framework is leveraged to systematically analyze the impact of camera imaging parameters, particularly exposure time, on the joint performance of sensing and communication. To address the resulting trade-off, we develop an analytically tractable suboptimal algorithm that determines a near-optimal exposure time in closed form. Compared with the exhaustive numerical search for the global optimum, the suboptimal algorithm reduces computational complexity from O(N) to O(1), while introducing only a modest average normalized deviation of 5.71%. Both theoretical analysis and experimental results confirm that, in high-speed communication or mobile sensing scenarios, careful selection of exposure time and explicit compensation for the camera’s low-pass filtering effect in receiver design are essential to achieving optimal dual-functional performance. Full article

16 pages, 3174 KB  
Article
Online Mapping from Weight Matching Odometry and Highly Dynamic Point Cloud Filtering via Pseudo-Occupancy Grid
by Xin Zhao, Xingyu Cao, Meng Ding, Da Jiang and Chao Wei
Sensors 2025, 25(22), 6872; https://doi.org/10.3390/s25226872 - 10 Nov 2025
Viewed by 2435
Abstract
Efficient locomotion in autonomous driving and robotics requires clearer visualization and more precise maps. This paper presents a high-accuracy online mapping framework comprising weight-matching LiDAR-IMU-GNSS odometry and an object-level highly dynamic point cloud filtering method based on a pseudo-occupancy grid. The odometry integrates IMU pre-integration, ground point segmentation through progressive morphological filtering (PMF), motion compensation, and weight feature point matching, which enhances alignment accuracy by combining geometric and reflectance intensity similarities. By computing the pseudo-occupancy ratio between the current frame and prior local submaps, the grid probability values are updated to identify the distribution of dynamic grids. Object-level point cloud cluster segmentation is obtained using the curved voxel clustering method, ultimately filtering out the object-level highly dynamic point clouds during the online mapping process. Compared with the LIO-SAM and FAST-LIO2 frameworks, the proposed odometry demonstrates superior accuracy on the KITTI, UrbanLoco, and Newer College (NCD) datasets. Meanwhile, the proposed highly dynamic point cloud filtering algorithm achieves better detection precision than Removert and ERASOR. Furthermore, a high-accuracy online map is built from a real-time dataset with comprehensive filtering of driving vehicles, cyclists, and pedestrians. This research contributes to high-accuracy online mapping, especially to the filtering of highly dynamic objects. Full article
(This article belongs to the Special Issue Application of LiDAR Remote Sensing and Mapping)

22 pages, 6748 KB  
Article
Automated 3D Reconstruction of Interior Structures from Unstructured Point Clouds
by Youssef Hany, Wael Ahmed, Adel Elshazly, Ahmad M. Senousi and Walid Darwish
ISPRS Int. J. Geo-Inf. 2025, 14(11), 428; https://doi.org/10.3390/ijgi14110428 - 31 Oct 2025
Viewed by 1499
Abstract
The automatic reconstruction of existing buildings has gained momentum through the integration of Building Information Modeling (BIM) into architecture, engineering, and construction (AEC) workflows. This study presents a hybrid methodology that combines deep learning with surface-based techniques to automate the generation of 3D models and 2D floor plans from unstructured indoor point clouds. The approach begins with point cloud preprocessing using voxel-based downsampling and robust statistical outlier removal. Room partitions are extracted via DBSCAN applied in the 2D space, followed by structural segmentation using the RandLA-Net deep learning model to classify key building components such as walls, floors, ceilings, columns, doors, and windows. To enhance segmentation fidelity, a density-based filtering technique is employed, and RANSAC is utilized to detect and fit planar primitives representing major surfaces. Wall-surface openings such as doors and windows are identified through local histogram analysis and interpolation in wall-aligned coordinate systems. The method supports complex indoor environments including Manhattan and non-Manhattan layouts, variable ceiling heights, and cluttered scenes with occlusions. The approach was validated using six datasets with varying architectural characteristics, and evaluated using completeness, correctness, and accuracy metrics. Results show a minimum completeness of 86.6%, correctness of 84.8%, and a maximum geometric error of 9.6 cm, demonstrating the robustness and generalizability of the proposed pipeline for automated as-built BIM reconstruction. Full article
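The preprocessing and surface-extraction steps named above (voxel-based downsampling, statistical outlier removal, RANSAC plane fitting) map directly onto standard Open3D calls. A hedged sketch is given below; the parameter values are assumptions, and the paper's DBSCAN room partitioning, RandLA-Net segmentation, and opening detection are not reproduced.

```python
import open3d as o3d

def preprocess_and_fit_plane(path, voxel_size=0.03):
    """Voxel downsampling, statistical outlier removal, and RANSAC plane fitting with
    Open3D. Parameters are illustrative, not the values used in the paper."""
    pcd = o3d.io.read_point_cloud(path)
    down = pcd.voxel_down_sample(voxel_size=voxel_size)
    clean, _ = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Fit the dominant planar primitive (e.g., a wall, floor, or ceiling surface).
    plane_model, inliers = clean.segment_plane(distance_threshold=0.02,
                                               ransac_n=3,
                                               num_iterations=1000)
    a, b, c, d = plane_model          # plane equation ax + by + cz + d = 0
    surface = clean.select_by_index(inliers)
    return (a, b, c, d), surface
```

In a full pipeline, the remaining points would be re-fed to RANSAC iteratively to peel off the other major surfaces before opening detection.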

22 pages, 3921 KB  
Article
Tightly Coupled LiDAR-Inertial Odometry for Autonomous Driving via Self-Adaptive Filtering and Factor Graph Optimization
by Weiwei Lyu, Haoting Li, Shuanggen Jin, Haocai Huang, Xiaojuan Tian, Yunlong Zhang, Zheyuan Du and Jinling Wang
Machines 2025, 13(11), 977; https://doi.org/10.3390/machines13110977 - 23 Oct 2025
Viewed by 1453
Abstract
Simultaneous Localization and Mapping (SLAM) has become a critical tool for fully autonomous driving. However, current methods suffer from inefficient data utilization and degraded navigation performance in complex and unknown environments. In this paper, an accurate and tightly coupled method of LiDAR-inertial odometry is proposed. First, a self-adaptive voxel grid filter is developed to dynamically downsample the original point clouds based on environmental feature richness, aiming to balance navigation accuracy and real-time performance. Second, keyframe factors are selected based on thresholds of translation distance, rotation angle, and time interval and then introduced into the factor graph to improve global consistency. Additionally, high-quality Global Navigation Satellite System (GNSS) factors are selected and incorporated into the factor graph through linear interpolation, thereby improving the navigation accuracy in complex and unknown environments. The proposed method is evaluated on the KITTI dataset over various scales and environments. Results show that the proposed method outperforms other methods such as ALOAM, LIO-SAM, and SC-LeGO-LOAM. In urban scenes in particular, its trajectory accuracy improves over these methods by 33.13%, 57.56%, and 58.4%, respectively, illustrating excellent navigation and positioning capabilities. Full article
(This article belongs to the Section Vehicle Engineering)
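The self-adaptive voxel grid filter above chooses its downsampling resolution from environmental feature richness. One simple reading of that idea, shown below, estimates local surface variation and maps it to a leaf size before calling Open3D's voxel filter; the richness metric, its scaling, and all parameter values are assumptions for illustration, not the paper's formula.

```python
import numpy as np
import open3d as o3d

def adaptive_voxel_downsample(pcd, min_leaf=0.1, max_leaf=0.5, knn=20):
    """Pick a voxel leaf size between min_leaf and max_leaf from a feature-richness proxy,
    then downsample. The mapping is an illustrative assumption."""
    pts = np.asarray(pcd.points)
    tree = o3d.geometry.KDTreeFlann(pcd)
    variations = []
    step = max(1, len(pts) // 2000)                             # subsample queries for speed
    for i in range(0, len(pts), step):
        _, idx, _ = tree.search_knn_vector_3d(pts[i], knn)
        cov = np.cov(pts[np.asarray(idx)].T)
        w = np.sort(np.linalg.eigvalsh(cov))
        variations.append(w[0] / (w.sum() + 1e-12))             # surface variation in [0, 1/3]
    richness = np.clip(np.mean(variations) / 0.1, 0.0, 1.0)     # 0 = geometrically poor, 1 = feature-rich
    leaf = max_leaf - richness * (max_leaf - min_leaf)          # richer scenes keep a finer grid
    return pcd.voxel_down_sample(voxel_size=leaf), leaf
```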

20 pages, 11855 KB  
Article
High-Precision Extrinsic Calibration for Multi-LiDAR Systems with Narrow FoV via Synergistic Planar and Circular Features
by Xinbao Sun, Zhi Zhang, Shuo Xu and Jinyue Liu
Sensors 2025, 25(20), 6432; https://doi.org/10.3390/s25206432 - 17 Oct 2025
Viewed by 1114
Abstract
Precise extrinsic calibration is a fundamental prerequisite for data fusion in multi-LiDAR systems. However, conventional methods are often encumbered by dependencies on initial estimates, auxiliary sensors, or manual feature selection, which renders them complex, time-consuming, and limited in adaptability across diverse environments. To address these limitations, this paper proposes a novel, high-precision extrinsic calibration method for multi-LiDAR systems with a narrow Field of View (FoV), achieved through the synergistic use of circular and planar features. Our approach commences with the automatic segmentation of the calibration target’s point cloud using an improved VoxelNet. Subsequently, a denoising step, combining RANSAC and a Gaussian Mean Intensity Filter (GMIF), is applied to ensure high-quality feature extraction. From the refined point cloud, planar and circular features are robustly extracted via Principal Component Analysis (PCA) and least-squares fitting, respectively. Finally, the extrinsic parameters are optimized by minimizing a nonlinear objective function formulated with joint constraints from both geometric features. Simulation results validate the high precision of our method, with rotational and translational errors contained within 0.08° and 0.8 cm. Furthermore, real-world experiments confirm its effectiveness and superiority, outperforming conventional point-cloud registration techniques. Full article
(This article belongs to the Section Sensors and Robotics)

19 pages, 20163 KB  
Article
Voxel-Based Roadway Terrain Risk Modeling and Traversability Assessment in Underground Coal Mines
by Wanzi Yan, Zhencai Zhu, Yidong Zhang, Hao Lu, Minti Xue, Yu Tang and Shaobo Sun
Machines 2025, 13(9), 868; https://doi.org/10.3390/machines13090868 - 18 Sep 2025
Viewed by 639
Abstract
Effective roadway environment sensing is critical for intelligent underground vehicle navigation. Dust pollution and complex terrain in underground roadways present key challenges for quantifying passability risks: (1) Over-filtering of dust noise in lidar point clouds can inadvertently remove valuable information. (2) The enclosed and chaotic nature of underground roadways prevents planar information from fully representing spatial constraints. To address these challenges, this paper proposes a method for constructing terrain risk voxels and assessing navigability in coal mine tunnels. First, an improved particle filter combined with image features performs two-stage dust filtering. Second, D-S theory is applied to fuse and evaluate three-dimensional tunnel risks, constructing 3D terrain risk voxels. Finally, navigable spaces are identified and their characteristics quantified to assess passage risks. Experiments show that the proposed dust filtering algorithm achieves 96.7% average accuracy in primary underground areas. The D-S theory effectively constructs roadway terrain risk voxels, enabling reliable quantitative assessment of roadway passability risks. Full article
(This article belongs to the Section Machine Design and Theory)

30 pages, 6195 KB  
Article
Digital Inspection Technology for Sheet Metal Parts Using 3D Point Clouds
by Jian Guo, Dingzhong Tan, Shizhe Guo, Zheng Chen and Rang Liu
Sensors 2025, 25(15), 4827; https://doi.org/10.3390/s25154827 - 6 Aug 2025
Viewed by 1150
Abstract
To solve the low efficiency of traditional sheet metal measurement, this paper proposes a digital inspection method for sheet metal parts based on 3D point clouds. The 3D point cloud data of sheet metal parts are collected using a 3D laser scanner, and the topological relationship is established by using a K-dimensional tree (KD tree). The pass-through filtering method is adopted to denoise the point cloud data. To preserve the fine features of the parts, an improved voxel grid method is proposed for the downsampling of the point cloud data. Feature points are extracted via the intrinsic shape signatures (ISS) algorithm and described using the fast point feature histograms (FPFH) algorithm. After rough registration with the sample consensus initial alignment (SAC-IA) algorithm, an initial position is provided for fine registration. The improved iterative closest point (ICP) algorithm, used for fine registration, can enhance the registration accuracy and efficiency. The greedy projection triangulation algorithm optimized by moving least squares (MLS) smoothing ensures surface smoothness and geometric accuracy. The reconstructed 3D model is projected onto a 2D plane, and the actual dimensions of the parts are calculated based on the pixel values of the sheet metal parts and the conversion scale. Experimental results show that the measurement error of this inspection system for three sheet metal workpieces ranges from 0.1416 mm to 0.2684 mm, meeting the accuracy requirement of ±0.3 mm. This method provides a reliable digital inspection solution for sheet metal parts. Full article
(This article belongs to the Section Industrial Sensors)
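The registration chain described above (FPFH features, SAC-IA coarse alignment, then ICP refinement) is a standard coarse-to-fine pattern. A hedged Open3D sketch follows, assuming Open3D >= 0.13 for the feature-matching RANSAC signature; parameter values are illustrative, and the paper's improved voxel grid and improved ICP variants are not reproduced.

```python
import open3d as o3d

def coarse_to_fine_registration(source, target, voxel=0.005):
    """FPFH + RANSAC-based global alignment (in the spirit of SAC-IA), then point-to-plane
    ICP refinement on the downsampled clouds. Parameters are illustrative assumptions."""
    def prepare(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = prepare(source)
    tgt_down, tgt_fpfh = prepare(target)

    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, 1.5 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3)

    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, 1.0 * voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```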

18 pages, 12540 KB  
Article
SS-LIO: Robust Tightly Coupled Solid-State LiDAR–Inertial Odometry for Indoor Degraded Environments
by Yongle Zou, Peipei Meng, Jianqiang Xiong and Xinglin Wan
Electronics 2025, 14(15), 2951; https://doi.org/10.3390/electronics14152951 - 24 Jul 2025
Viewed by 1575
Abstract
Solid-state LiDAR systems are widely recognized for their high reliability, low cost, and lightweight design, but they encounter significant challenges in SLAM tasks due to their limited field of view and uneven horizontal scanning patterns, especially in indoor environments with geometric constraints. To address these challenges, this paper proposes SS-LIO, a precise, robust, and real-time LiDAR–Inertial odometry solution designed for solid-state LiDAR systems. SS-LIO uses uncertainty propagation in LiDAR point-cloud modeling and a tightly coupled iterative extended Kalman filter to fuse LiDAR feature points with IMU data for reliable localization. It also employs voxels to encapsulate planar features for accurate map construction. Experimental results from open-source datasets and self-collected data demonstrate that SS-LIO achieves superior accuracy and robustness compared to state-of-the-art methods, with an end-to-end drift of only 0.2 m in indoor degraded scenarios. The detailed and accurate point-cloud maps generated by SS-LIO reflect the smoothness and precision of trajectory estimation, with significantly reduced drift and deviation. These outcomes highlight the effectiveness of SS-LIO in addressing the SLAM challenges posed by solid-state LiDAR systems and its capability to produce reliable maps in complex indoor settings. Full article
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)

22 pages, 13424 KB  
Article
Measurement of Fracture Networks in Rock Sample by X-Ray Tomography, Convolutional Filtering and Deep Learning
by Alessia Caputo, Maria Teresa Calcagni, Giovanni Salerno, Elisa Mammoliti and Paolo Castellini
Sensors 2025, 25(14), 4409; https://doi.org/10.3390/s25144409 - 15 Jul 2025
Cited by 2 | Viewed by 1598
Abstract
This study presents a comprehensive methodology for the detection and characterization of fractures in geological samples using X-ray computed tomography (CT). By combining convolution-based image processing techniques with advanced neural network-based segmentation, the proposed approach achieves high precision in identifying complex fracture networks. The method was applied to a marly limestone sample from the Maiolica Formation, part of the Umbria–Marche stratigraphic succession (Northern Apennines, Italy), a geological context where fractures often vary in size and contrast and are frequently filled with minerals such as calcite or clays, making their detection challenging. A critical part of the work involved addressing multiple sources of uncertainty that can impact fracture identification and measurement. These included the inherent spatial resolution limit of the CT system (voxel size of 70.69 μm), low contrast between fractures and the surrounding matrix, artifacts introduced by the tomographic reconstruction process (specifically the Radon transform), and noise from both the imaging system and environmental factors. To mitigate these challenges, we employed a series of preprocessing steps such as Gaussian and median filtering to enhance image quality and reduce noise, scanning from multiple angles to improve data redundancy, and intensity normalization to compensate for shading artifacts. The neural network segmentation demonstrated superior capability in distinguishing fractures filled with various materials from the host rock, overcoming the limitations observed in traditional convolution-based methods. Overall, this integrated workflow significantly improves the reliability and accuracy of fracture quantification in CT data, providing a robust and reproducible framework for the analysis of discontinuities in heterogeneous and complex geological materials. Full article
(This article belongs to the Section Sensing and Imaging)
