Search Results (2,034)

Search Parameters:
Keywords = cloud point extraction

12 pages, 12633 KB  
Article
Point Cloud Quality Assessment via Complexity-Driven Patch Sampling and Attention-Enhanced Swin-Transformer
by Xilei Shen, Qiqi Li, Renwei Tu, Yongqiang Bai, Di Ge and Zhongjie Zhu
Information 2026, 17(1), 93; https://doi.org/10.3390/info17010093 - 15 Jan 2026
Abstract
As an emerging immersive media format, point clouds (PCs) inevitably suffer from distortions such as compression and noise, where even local degradations may severely impair perceived visual quality and user experience. It is therefore essential to accurately evaluate the perceived quality of PCs. In this paper, a no-reference point cloud quality assessment (PCQA) method that uses complexity-driven patch sampling and an attention-enhanced Swin-Transformer is proposed to accurately assess the perceived quality of PCs. Given that projected PC maps effectively capture distortions and that the quality-related information density varies significantly across local patches, a complexity-driven patch sampling strategy is proposed. By quantifying patch complexity, regions with higher information density are preferentially sampled to enhance subsequent quality-sensitive feature representation. Given that the indistinguishable response strengths between key and redundant channels during feature extraction may dilute effective features, an Attention-Enhanced Swin-Transformer is proposed to adaptively reweight critical channels, thereby improving feature extraction performance. Given that traditional regression heads typically use a single-layer linear mapping, which overlooks the heterogeneous importance of information across channels, a gated regression head is designed to enable adaptive fusion of global and statistical features via a statistics-guided gating mechanism. Experiments on the SJTU-PCQA dataset demonstrate that the proposed method consistently outperforms representative PCQA methods.
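As an editorial illustration of the complexity-driven sampling idea described in this abstract, the sketch below scores patches by mean per-axis coordinate variance and keeps the top-k. The variance-based score and the names `patch_complexity` and `sample_patches` are assumptions for illustration, not the paper's actual complexity measure.

```python
import statistics

def patch_complexity(patch):
    """Proxy for information density: mean per-axis variance of the
    patch's point coordinates (a hypothetical stand-in for the paper's
    complexity measure)."""
    return statistics.fmean(
        statistics.pvariance(axis) for axis in zip(*patch)
    )

def sample_patches(patches, k):
    """Preferentially sample the k patches with the highest score."""
    return sorted(patches, key=patch_complexity, reverse=True)[:k]

# A degenerate flat patch vs. a spread-out one: the spread-out patch
# carries more geometric variation and is selected first.
flat = [(0.0, 0.0, 0.0)] * 8
rough = [(float(i), float(i % 3), float(-i)) for i in range(8)]
selected = sample_patches([flat, rough], k=1)
```

Any monotone complexity score (entropy of local normals, curvature statistics, etc.) could be dropped into `patch_complexity` without changing the selection logic.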

23 pages, 11947 KB  
Article
Geometry-Consistency-Guided Unsupervised Domain Adaptation Framework for Cross-Voltage Transmission-Line Point-Cloud Semantic Segmentation
by Kun Ji, Hongwu Tan, Dabing Yang, Pu Wang, Di Cao, Yuan Gao and Zhou Yang
Electronics 2026, 15(2), 378; https://doi.org/10.3390/electronics15020378 - 15 Jan 2026
Abstract
Semantic segmentation of transmission-line point clouds is fundamental to intelligent power inspection and grid asset management, as segmentation accuracy directly influences defect detection and facility assessment tasks. However, transmission-line point clouds collected across different voltage levels often show significant variations in density and geometric structure due to heterogeneous LiDAR sensors and flight configurations. Combined with the high cost of large-scale manual annotation, these factors limit the scalability of existing supervised segmentation methods. To overcome these challenges, we propose a geometry-consistency-guided unsupervised domain adaptation framework tailored for cross-voltage transmission-line point-cloud segmentation. The framework employs KPConvX as the backbone and integrates three progressive components. First, a geometric consistency constraint enhances robustness to spatial variations and enables extraction of structural features invariant across voltage levels. Second, a domain feature alignment module reduces distribution shifts through global feature transformation. Third, a minimum-entropy-based pseudo-label refinement strategy improves the reliability of pseudo-labels during self-training. Experiments on a multi-voltage transmission-line dataset demonstrate the effectiveness of the proposed method. With the KPConvX backbone, the framework achieves 66.1% mean Intersection over Union (mIoU) and 94.3% overall accuracy on the unlabeled 110 kV target domain, exceeding the source-only baseline by 15.6% mIoU and outperforming several state-of-the-art UDA methods. This work provides an efficient, annotation-friendly solution for cross-voltage point-cloud segmentation and offers a promising direction for domain adaptation in complex power-grid environments.
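The minimum-entropy pseudo-label refinement mentioned in this abstract can be sketched as an entropy filter over per-point class probabilities: confident (low-entropy) predictions become pseudo-labels, uncertain ones are discarded. The threshold value and the function names here are illustrative assumptions, not the paper's implementation.

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def refine_pseudo_labels(prob_rows, max_entropy=0.5):
    """Keep only confident pseudo-labels: predictions whose entropy
    falls below the threshold (a simplified stand-in for the paper's
    minimum-entropy refinement). Returns (index, argmax-label) pairs."""
    kept = []
    for i, probs in enumerate(prob_rows):
        if entropy(probs) <= max_entropy:
            kept.append((i, max(range(len(probs)), key=probs.__getitem__)))
    return kept

preds = [
    [0.96, 0.02, 0.02],  # confident -> kept as pseudo-label 0
    [0.40, 0.35, 0.25],  # uncertain -> dropped from self-training
]
labels = refine_pseudo_labels(preds)
```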
(This article belongs to the Special Issue Advances in 3D Computer Vision and 3D Data Processing)

9 pages, 955 KB  
Proceeding Paper
LiDAR-Based 3D Mapping Approach for Estimating Tree Carbon Stock: A University Campus Case Study
by Abdul Samed Kaya, Aybuke Buksur, Yasemin Burcak and Hidir Duzkaya
Eng. Proc. 2026, 122(1), 8; https://doi.org/10.3390/engproc2026122008 - 15 Jan 2026
Abstract
This study aims to develop and demonstrate a low-cost LiDAR-based 3D mapping approach for estimating tree carbon stock in university campuses. Unlike conventional field-based measurements, which are labor-intensive and error-prone, the proposed system integrates a 2D LiDAR sensor with a servo motor and odometry data to generate three-dimensional point clouds of trees. From these data, key biometric parameters such as diameter at breast height (DBH) and total height are automatically extracted and incorporated into species-specific and generalized allometric equations, in line with IPCC 2006/2019 guidelines, to estimate above-ground biomass, below-ground biomass, and total carbon storage. The experimental study is conducted over approximately 70,000 m2 of green space at Gazi University, Ankara, where six dominant species have been identified, including Cedrus libani, Pinus nigra, Platanus orientalis, and Ailanthus altissima. Results revealed a total carbon stock of 16.82 t C, corresponding to 61.66 t CO2eq. Among species, Cedrus libani (29,468.86 kg C) and Ailanthus altissima (13,544.83 kg C) showed the highest contributions, while Picea orientalis accounted for the lowest. The findings confirm that the proposed system offers a reliable, portable, cost-effective alternative to professional LiDAR scanners. This approach supports sustainable campus management and highlights the broader applicability of low-cost LiDAR technologies for urban carbon accounting and climate change mitigation strategies.
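The biomass-to-carbon chain this abstract describes (allometric biomass from DBH, then carbon fraction, then CO2-equivalent) follows a standard IPCC-style pattern. In the sketch below, the allometric coefficients `a` and `b` are placeholders (the study uses species-specific equations); 0.47 (carbon fraction of dry biomass) and 44/12 (C to CO2 mass ratio) are common IPCC-style defaults, and 0.26 is a typical root-to-shoot ratio.

```python
def tree_carbon(dbh_cm, a=0.1, b=2.4, root_shoot=0.26, carbon_frac=0.47):
    """IPCC-style carbon accounting for one tree. a and b are
    placeholder allometric coefficients, not the study's values."""
    agb = a * dbh_cm ** b               # above-ground biomass, kg
    bgb = root_shoot * agb              # below-ground biomass, kg
    carbon = carbon_frac * (agb + bgb)  # stored carbon, kg C
    co2eq = carbon * 44.0 / 12.0        # kg CO2-equivalent
    return carbon, co2eq

# A 30 cm DBH tree under the placeholder coefficients.
c_kg, co2_kg = tree_carbon(30.0)
```

Summing `tree_carbon` over every segmented trunk in the point cloud yields the campus-level totals reported in the abstract.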

24 pages, 39327 KB  
Article
Forest Surveying with Robotics and AI: SLAM-Based Mapping, Terrain-Aware Navigation, and Tree Parameter Estimation
by Lorenzo Scalera, Eleonora Maset, Diego Tiozzo Fasiolo, Khalid Bourr, Simone Cottiga, Andrea De Lorenzo, Giovanni Carabin, Giorgio Alberti, Alessandro Gasparetto, Fabrizio Mazzetto and Stefano Seriani
Machines 2026, 14(1), 99; https://doi.org/10.3390/machines14010099 - 14 Jan 2026
Abstract
Forest surveying and inspection face significant challenges due to unstructured environments, variable terrain conditions, and the high costs of manual data collection. Although mobile robotics and artificial intelligence offer promising solutions, reliable autonomous navigation in forests, terrain-aware path planning, and tree parameter estimation remain open challenges. In this paper, we present the results of the AI4FOREST project, which addresses these issues through three main contributions. First, we develop an autonomous mobile robot, integrating SLAM-based navigation, 3D point cloud reconstruction, and a vision-based deep learning architecture to enable tree detection and diameter estimation. This system demonstrates the feasibility of generating a digital twin of a forest while operating autonomously. Second, to overcome the limitations of classical navigation approaches in heterogeneous natural terrains, we introduce a machine learning-based surrogate model of wheel–soil interaction, trained on a large synthetic dataset derived from classical terramechanics. Compared to purely geometric planners, the proposed model enables realistic dynamics simulation and improves navigation robustness by accounting for terrain–vehicle interactions. Finally, we investigate the impact of point cloud density on the accuracy of forest parameter estimation, identifying the minimum sampling requirements needed to extract tree diameters and heights. This analysis provides support to balance sensor performance, robot speed, and operational costs. Overall, the AI4FOREST project advances the state of the art in autonomous forest monitoring by jointly addressing SLAM-based mapping, terrain-aware navigation, and tree parameter estimation.

29 pages, 7092 KB  
Article
Dual-Branch Attention Photovoltaic Power Forecasting Model Integrating Ground-Based Cloud Image Features
by Lianglin Zou, Hongyang Quan, Jinguo He, Shuai Zhang, Ping Tang, Xiaoshi Xu and Jifeng Song
Energies 2026, 19(2), 409; https://doi.org/10.3390/en19020409 - 14 Jan 2026
Abstract
The photovoltaic field has seen significant development in recent years, with continuously expanding installation capacity and increasing grid integration. However, due to the intermittency of solar energy and meteorological variability, PV output power poses serious challenges to grid security and dispatch reliability. Traditional forecasting methods largely rely on modeling historical power and meteorological data, often neglecting the consideration of cloud movement, which constrains further improvement in prediction accuracy. To enhance prediction accuracy and model interpretability, this paper proposes a dual-branch attention-based PV power prediction model that integrates physical features from ground-based cloud images. Regarding input features, a cloud segmentation model is constructed based on the vision foundation model DINO encoder and an improved U-Net decoder to obtain cloud cover information. Based on deep feature point detection and an attention matching mechanism, cloud motion vectors are calculated to extract cloud motion speed and direction features. For feature processing, feature attention and temporal attention mechanisms are introduced, enabling the model to learn key meteorological factors and critical historical time steps. Structurally, a parallel architecture consisting of a linear branch and a nonlinear branch is adopted. A context-aware fusion module adaptively combines the prediction results from both branches, achieving collaborative modeling of linear trends and nonlinear fluctuations. Comparative experiments were conducted using two years of engineering data. Experimental results demonstrate that the proposed model outperforms the benchmarks across multiple metrics, validating the predictive advantages of the dual-branch structure that integrates physical features under complex weather conditions.
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)

29 pages, 7355 KB  
Article
A Flexible Wheel Alignment Measurement Method via APCS-SwinUnet and Point Cloud Registration
by Bo Shi, Hongli Liu and Emanuele Zappa
Metrology 2026, 6(1), 4; https://doi.org/10.3390/metrology6010004 - 12 Jan 2026
Abstract
To achieve low-cost and flexible wheel angle measurement, we propose a novel strategy that integrates a wheel segmentation network with 3D vision. In this framework, a semantic segmentation network is first employed to extract the wheel rim, followed by angle estimation through ICP-based point cloud registration. Since wheel rim extraction is closely tied to angle computation accuracy, we introduce APCS-SwinUnet, a segmentation network built on the SwinUnet architecture and enhanced with ASPP, CBAM, and a hybrid loss function. Compared with traditional image processing methods in wheel alignment, APCS-SwinUnet delivers more accurate and refined segmentation, especially at wheel boundaries. Moreover, it demonstrates strong adaptability across diverse tire types and lighting conditions. Based on the segmented mask, the wheel rim point cloud is extracted, and an iterative closest point algorithm is then employed to register the target point cloud with a reference one. Taking the zero-angle condition as the reference, the rotation and translation matrices are obtained through point cloud registration. These matrices are subsequently converted into toe and camber angles via matrix-to-angle transformation. Experimental results verify that the proposed solution enables accurate angle measurement in a cost-effective, simple, and flexible manner. Repeated experiments further validate its robustness and stability.
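The matrix-to-angle step this abstract mentions can be illustrated as follows: with z up and x pointing along the vehicle, toe is a rotation about z and camber a rotation about x, both recoverable from the registration rotation matrix with `atan2`. The axis conventions are an assumption; in the paper, ICP supplies the rotation matrix.

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def toe_camber(R):
    """Matrix-to-angle transformation for R = Rz(toe) @ Rx(camber):
    toe from the heading of the rotated x-axis, camber from the tilt
    of the rotated z-axis."""
    toe = math.atan2(R[1][0], R[0][0])
    camber = math.atan2(R[2][1], R[2][2])
    return toe, camber

# Round-trip check: build R from known angles, then recover them.
R = matmul(rot_z(math.radians(1.5)), rot_x(math.radians(-0.8)))
toe, camber = toe_camber(R)
```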
(This article belongs to the Special Issue Applied Industrial Metrology: Methods, Uncertainties, and Challenges)

27 pages, 6082 KB  
Article
AGSM–CPA: Reliability-Aware Robustness for Rotation-Invariant Point Cloud Learning
by Mengyuan Ge, Shuocheng Wang, Yong Yang and Junfeng Yao
Mathematics 2026, 14(2), 278; https://doi.org/10.3390/math14020278 - 12 Jan 2026
Abstract
Rotation-invariant (RI) point cloud models aim to reduce sensitivity to viewpoint changes, but their performance still drops noticeably in real-world settings when local geometry is degraded by noise, occlusion, and uneven sampling. Once these disturbances propagate through deeper layers, they can lead to significant robustness degradation, especially for high-capacity RI backbones. To address this problem, we propose AGSM-CPA (Adaptive Geometric Signal Modulation with Cross-Perturbation Alignment), a lightweight and plug-and-play framework that enhances the robustness of RI models without altering their core convolutional operators. It integrates two complementary modules: the Geometric Signal-to-Noise Ratio (G-SNR) modulation mechanism, which adaptively suppresses unreliable neighborhoods based on local coordinate variance, and the Cross-Perturbation Semantic Consistency Alignment (CP-SCL) module, which enforces prediction consistency between weakly augmented inputs and strongly corrupted ones. We evaluate AGSM-CPA on ModelNet40, ScanObjectNN, and ShapeNetPart. Across standard corruption protocols, AGSM-CPA consistently improves robustness while maintaining competitive clean accuracy with negligible computational overhead. These results indicate that AGSM-CPA offers a practical, reliability-aware adapter for robust rotation-invariant point cloud learning.
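The cross-perturbation consistency idea in this abstract (penalize disagreement between predictions on a weakly augmented input and a strongly corrupted one) can be sketched as a simple squared-difference penalty. The squared-difference form is an assumption for illustration; CP-SCL's exact loss is defined in the paper.

```python
def consistency_loss(p_weak, p_strong):
    """Cross-perturbation consistency penalty: mean squared difference
    between the class probabilities predicted for a weakly augmented
    view and a strongly corrupted view of the same input."""
    return sum((a - b) ** 2 for a, b in zip(p_weak, p_strong)) / len(p_weak)

# Predictions that stay stable under corruption incur little penalty;
# predictions that flip incur a large one.
loss_aligned = consistency_loss([0.9, 0.1], [0.88, 0.12])
loss_drifted = consistency_loss([0.9, 0.1], [0.30, 0.70])
```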
(This article belongs to the Section E1: Mathematics and Computer Science)

18 pages, 5467 KB  
Article
Automated Dimension Recognition and BIM Modeling of Frame Structures Based on 3D Point Clouds
by Fengyu Zhang, Jinyang Liu, Peizhen Li, Lin Chen and Qingsong Xiong
Electronics 2026, 15(2), 293; https://doi.org/10.3390/electronics15020293 - 9 Jan 2026
Abstract
Building information models (BIMs) serve as a foundational tool for digital management of existing structures. Traditional methods suffer from low automation and heavy reliance on manual intervention. This paper proposes an automated method for structural component dimension recognition and BIM modeling based on 3D point cloud data. The proposed methodology follows a three-step workflow. First, the raw point cloud is semantically segmented using the PointNet++ deep learning network, and individual structural components are effectively isolated using the Fast Euclidean Clustering (FEC) algorithm. Second, the principal axis of each component is determined through Principal Component Analysis, and the Random Sample Consensus (RANSAC) algorithm is applied to fit the boundary lines of the projected cross-sections, enabling the automated extraction of geometric dimensions. Finally, an automated script maps the extracted geometric parameters to standard IFC entities to generate the BIM model. The experimental results demonstrate that the average dimensional error for beams and columns is within 3 mm, with the exception of specific occluded components. This study realizes the efficient transformation from point cloud data to BIM models through an automated workflow, providing reliable technical support for the digital reconstruction of existing buildings.
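The principal-axis step described here uses PCA on component points; for the projected 2D cross-sections, the principal direction has a closed form, theta = 0.5 * atan2(2*cov_xy, cov_xx - cov_yy). The sketch below shows this 2D case (the paper works on 3D components before projection).

```python
import math

def principal_axis_2d(points):
    """Principal direction of a 2D point set via the closed-form PCA
    angle computed from the covariance entries."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    theta = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)
    return math.cos(theta), math.sin(theta)

# Points scattered along y = x: the axis should point at 45 degrees.
pts = [(t, t) for t in (-2.0, -1.0, 0.0, 1.0, 2.0)]
ux, uy = principal_axis_2d(pts)
```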

19 pages, 5302 KB  
Article
LSSCC-Net: Integrating Spatial-Feature Aggregation and Adaptive Attention for Large-Scale Point Cloud Semantic Segmentation
by Wenbo Wang, Xianghong Hua, Cheng Li, Pengju Tian, Yapeng Wang and Lechao Liu
Symmetry 2026, 18(1), 124; https://doi.org/10.3390/sym18010124 - 8 Jan 2026
Abstract
Point cloud semantic segmentation is a key technology for applications such as autonomous driving, robotics, and virtual reality. Current approaches are heavily reliant on local relative coordinates and simplistic attention mechanisms to aggregate neighborhood information. This often leads to an ineffective joint representation of geometric perturbations and feature variations, coupled with a lack of adaptive selection for salient features during context fusion. On this basis, we propose LSSCC-Net, a novel segmentation framework based on LACV-Net. First, the spatial-feature dynamic aggregation module is designed to fuse offset information by symmetric interaction between spatial positions and feature channels, thus supplementing local structural information. Second, a dual-dimensional attention mechanism (spatial and channel) is introduced to symmetrically deploy attention modules in both the encoder and decoder, prioritizing salient information extraction. Finally, Lovász-Softmax Loss is used as an auxiliary loss to optimize the training objective. The proposed method is evaluated on two public benchmark datasets. The mIoU on the Toronto3D and S3DIS datasets is 83.6% and 65.2%, respectively. Compared with the baseline LACV-Net, LSSCC-Net showed notable improvements in challenging categories: the IoU for “road mark” and “fence” on Toronto3D increased by 3.6% and 8.1%, respectively. These results indicate that LSSCC-Net more accurately characterizes complex boundaries and fine-grained structures, enhancing segmentation capabilities for small-scale targets and category boundaries.

24 pages, 6574 KB  
Article
Three-Dimensional Reconstruction and Scour Volume Detection of Offshore Wind Turbine Foundations Based on Side-Scan Sonar
by Yilong Wang, Lijia Tao, Mingxin Yuan and Jingjing Yang
Sensors 2026, 26(2), 386; https://doi.org/10.3390/s26020386 - 7 Jan 2026
Abstract
To enable timely, effective, and high-accuracy detection of scour around offshore wind turbine pile foundations, this study proposes a three-dimensional reconstruction and scour volume detection method based on side-scan sonar imagery. First, the sonar images of pile foundations are preprocessed through grayscale conversion, binarization, and region expansion and merging to obtain an effective grayscale representation of scour pits. An optimized Shape-from-Shading (SFS) method is then applied to reconstruct the three-dimensional geometry from the effective grayscale map, generating point cloud data of the scour pits. Subsequently, the point cloud data are filtered using curvature and normal vector constraints, followed by depth-based z-axis descent detection, clustering, and morphological restoration to extract individual scour pit point clouds. Finally, a weight-corrected AlphaShape algorithm is employed to accurately calculate the volume of each scour pit. Numerical experiments involving five simulated scour scenarios across three types demonstrate that the proposed method achieves accurate identification and extraction of scour pit point clouds, with an average volume measurement accuracy of 97.495% compared with theoretical values. Field measurements in real-world environments further validate the effectiveness of the proposed method for practical scour volume detection around offshore wind turbine foundations.
(This article belongs to the Special Issue Advanced Sensing Techniques for Environmental and Energy Systems)

23 pages, 30920 KB  
Article
A Surface Defect Detection System for Industrial Conveyor Belt Inspection Using Apple’s TrueDepth Camera Technology
by Mohammad Siami, Przemysław Dąbek, Hamid Shiri, Tomasz Barszcz and Radosław Zimroz
Appl. Sci. 2026, 16(2), 609; https://doi.org/10.3390/app16020609 - 7 Jan 2026
Abstract
Maintaining the structural integrity of conveyor belts is essential for safe and reliable mining operations. However, these belts are susceptible to longitudinal tearing and surface degradation from material impact, fatigue, and deformation. Many computer vision-based inspection methods are inefficient and unreliable in harsh mining environments characterized by dust and variable lighting. This study introduces a smartphone-driven defect detection system for the cost-effective, geometric inspection of conveyor belt surfaces. Using Apple’s iPhone 12 Pro Max (Apple Inc., Cupertino, CA, USA), the system captures 3D point cloud data from a moving belt with induced damage via the integrated TrueDepth camera. A key innovation is a 3D-to-2D projection pipeline that converts point cloud data into structured representations compatible with standard 2D Convolutional Neural Networks (CNNs). We then propose a hybrid deep learning and machine learning model, where features extracted by pre-trained CNNs (VGG16, ResNet50, InceptionV3, Xception) are classified by ensemble methods (Random Forest, XGBoost, LightGBM). The proposed system achieves high detection accuracy, exceeding a 0.97 F1 score for all proposed model implementations, with the TrueDepth F1 score more than 0.05 higher than that of the RGB approach. The cost-effective smartphone-based sensing platform proved capable of supporting near-real-time maintenance decisions. Laboratory results demonstrate the method’s reliability, with measurement errors for defect dimensions within 3 mm. This approach shows significant potential to improve conveyor belt management, reduce maintenance costs, and enhance operational safety.
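A 3D-to-2D projection pipeline of the kind this abstract describes can be sketched as orthographic binning: each (x, y, z) point falls into an XY grid cell, and the cell keeps the nearest (maximum-z) depth, producing a 2D depth image a CNN can consume. Grid size and resolution below are arbitrary demo values, not the paper's settings.

```python
def project_to_depth_grid(points, cell=0.1, width=4, height=4):
    """Orthographic 3D-to-2D projection: bin each (x, y, z) point into
    an XY grid cell and keep the maximum z per cell as its depth value.
    Points outside the grid are ignored."""
    grid = [[0.0] * width for _ in range(height)]
    for x, y, z in points:
        col, row = int(x / cell), int(y / cell)
        if 0 <= row < height and 0 <= col < width:
            grid[row][col] = max(grid[row][col], z)
    return grid

# Two points share a cell (the higher survives); one lands elsewhere.
cloud = [(0.05, 0.05, 1.2), (0.05, 0.05, 0.9), (0.35, 0.25, 0.4)]
depth = project_to_depth_grid(cloud)
```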
(This article belongs to the Special Issue Mining Engineering: Present and Future Prospectives)

22 pages, 1715 KB  
Article
A Semantic-Associated Factor Graph Model for LiDAR-Assisted Indoor Multipath Localization
by Bingxun Liu, Ke Han, Zhongliang Deng and Gan Guo
Sensors 2026, 26(1), 346; https://doi.org/10.3390/s26010346 - 5 Jan 2026
Abstract
In indoor environments where Global Navigation Satellite System (GNSS) signals are entirely blocked, wireless signals such as 5G and Ultra-Wideband (UWB) have become primary means for high-precision positioning. However, complex indoor structures lead to significant multipath effects, which severely constrain the improvement of positioning accuracy. Existing indoor positioning methods rarely link environmental semantic information (e.g., wall, column) to multipath error estimation, leading to inaccurate multipath correction—especially in complex scenes with multiple reflective objects. To address this issue, this paper proposes a LiDAR-assisted multipath estimation and positioning method. This method constructs a tightly coupled perception-positioning framework: first, a semantic-feature-based neural network for reflective surface detection is designed to accurately extract the geometric parameters of potential reflectors from LiDAR point clouds; subsequently, a unified factor graph model is established to multidimensionally associate and jointly infer terminal states, virtual anchor (VA) states, wireless signal measurements, and LiDAR-perceived reflector information, enabling dynamic discrimination and utilization of both line-of-sight (LOS) and non-line-of-sight (NLOS) paths. Experimental results demonstrate that the root mean square error (RMSE) of the proposed method is improved by 32.1% compared to traditional multipath compensation approaches. This research provides an effective solution for high-precision and robust positioning in complex indoor environments.
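The virtual anchor (VA) concept in this abstract has a standard geometric core: mirroring a physical anchor across a detected reflector plane turns an NLOS reflection into a direct path from the mirrored point. The sketch below shows that mirror-image step (the factor graph inference itself is the paper's contribution and is not reproduced here).

```python
def virtual_anchor(anchor, plane_normal, plane_d):
    """Mirror a physical anchor across the reflector plane
    n·p + d = 0 (unit normal assumed). The image point lets an NLOS
    reflected path be modeled as a direct path from a virtual anchor."""
    nx, ny, nz = plane_normal
    dist = nx * anchor[0] + ny * anchor[1] + nz * anchor[2] + plane_d
    return (anchor[0] - 2 * dist * nx,
            anchor[1] - 2 * dist * ny,
            anchor[2] - 2 * dist * nz)

# Wall at x = 5 (normal (1, 0, 0), d = -5): an anchor at x = 2
# mirrors to x = 8 on the far side of the wall.
va = virtual_anchor((2.0, 1.0, 3.0), (1.0, 0.0, 0.0), -5.0)
```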
(This article belongs to the Special Issue Advances in RFID-Based Indoor Positioning Systems)

20 pages, 2351 KB  
Article
A Slicer-Independent Framework for Measuring G-Code Accuracy in Medical 3D Printing
by Michel Beyer, Alexandru Burde, Andreas E. Roser, Maximiliane Beyer, Sead Abazi and Florian M. Thieringer
J. Imaging 2026, 12(1), 25; https://doi.org/10.3390/jimaging12010025 - 4 Jan 2026
Abstract
In medical 3D printing, accuracy is critical for fabricating patient-specific implants and anatomical models. Although printer performance has been widely examined, the influence of slicing software on geometric fidelity is less frequently quantified. The slicing step, which converts STL files into printer-readable G-code, may introduce deviations that affect the final printed object. This study quantifies slicer-induced G-code deviations by comparing G-code-derived geometries with their reference STL models. Twenty mandibular models were processed using five slicers (PrusaSlicer (version 2.9.1.), Cura (version 5.2.2.), Simplify3D (version 4.1.2.), Slic3r (version 1.3.0.) and Fusion 360 (version 2.0.19725)). A custom Python workflow converted the G-code into point clouds and reconstructed STL meshes through XY and Z corrections, marching cubes surface extraction, and volumetric extrusion. A calibration object enabled coordinate normalization across slicers. Accuracy was assessed using Mean Surface Distance (MSD), Root Mean Square (RMS) deviation, and Volume Difference. MSD ranged from 0.071 to 0.095 mm, and RMS deviation from 0.084 to 0.113 mm, depending on the slicer. Volumetric differences were slicer-dependent. PrusaSlicer yielded the highest surface accuracy; Simplify3D and Slic3r showed best repeatability. Fusion 360 produced the largest deviations. The slicers introduced geometric deviations below 0.1 mm that represent a substantial proportion of the overall error in the FDM workflow.
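The MSD and RMS metrics this abstract reports can be illustrated on point sets with a brute-force nearest-neighbour distance (the paper computes them on reconstructed meshes; this simplified point-to-point version is an editorial sketch).

```python
import math

def surface_metrics(test_pts, ref_pts):
    """Mean Surface Distance and RMS deviation of a test point set
    against a reference set, using brute-force nearest neighbours."""
    dists = [min(math.dist(p, q) for q in ref_pts) for p in test_pts]
    msd = sum(dists) / len(dists)
    rms = math.sqrt(sum(d * d for d in dists) / len(dists))
    return msd, rms

# One test point sits 0.1 off the reference, the other matches exactly.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
test = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.0)]
msd, rms = surface_metrics(test, ref)
```

RMS is never smaller than MSD, which is why the abstract's RMS range (0.084 to 0.113 mm) sits above its MSD range (0.071 to 0.095 mm).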
(This article belongs to the Section Medical Imaging)

22 pages, 4393 KB  
Article
An Open-Source, Low-Cost Solution for 3D Scanning
by Andrei Mateescu, Ioana Livia Stefan, Silviu Raileanu and Ioan Stefan Sacala
Sensors 2026, 26(1), 322; https://doi.org/10.3390/s26010322 - 4 Jan 2026
Abstract
With new applications continuously emerging in the fields of manufacturing, quality control and inspection, the need to develop three-dimensional (3D) scanning solutions suitable for industrial environments increases. 3D scanning is the process of analyzing one or more objects in order to convert and store the object’s features in a digital format. Due to the increased costs of industrial 3D scanning solutions, this paper proposes an open-source, low-cost architecture for obtaining a 3D model that can be used in manufacturing, which involves a linear laser beam that is swept across the object via a rotating mirror, and a camera that captures images, which are then used to extract the dimensions of the object through a technique inspired by laser triangulation. The 3D models for several objects are obtained, analyzed and compared to the dimensions of their respective real-world counterparts. For the tested objects, the proposed system yields a maximum mean height error of 2.56 mm, a maximum mean length error of 1.48 mm and a maximum mean width error of 1.30 mm on the raw point cloud and a scanning time of ∼4 s per laser line. Finally, a few observations and ways to improve the proposed solution are mentioned.
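Laser triangulation of the kind this abstract draws on has a simple depth relation in the idealized parallel-axis case: z = f * b / u, where f is the focal length in pixels, b the camera-to-laser baseline, and u the laser line's pixel offset from the principal point. The calibration values below are illustrative, not the paper's, and the paper's own geometry (rotating mirror) is more involved.

```python
def triangulate_depth(u_px, focal_px=800.0, baseline_m=0.1):
    """Idealized laser-triangulation depth: z = f * b / u.
    Larger pixel offsets correspond to closer surfaces."""
    if u_px <= 0:
        raise ValueError("laser line must be offset from the principal point")
    return focal_px * baseline_m / u_px

z_near = triangulate_depth(200.0)  # large offset: close surface
z_far = triangulate_depth(50.0)    # small offset: distant surface
```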
(This article belongs to the Special Issue Artificial Intelligence and Sensing Technology in Smart Manufacturing)

26 pages, 12124 KB  
Article
MF-GCN: Multimodal Information Fusion Using Incremental Graph Convolutional Network for Ship Behavior Anomaly Detection
by Ruixin Ma, Jinhao Zhang, Weizhi Nie, Naiming Ge, Hao Wen and Aoxiang Liu
J. Mar. Sci. Eng. 2026, 14(1), 87; https://doi.org/10.3390/jmse14010087 - 1 Jan 2026
Abstract
Ship behavior anomaly detection is critical for intelligent perception and early warning in complex inland waterways, where single-source sensing (e.g., AIS-only or vision-only) is often fragile under occlusion, illumination variation, and signal noise. This study proposes MF-GCN, a multimodal (heterogeneous) information fusion framework based on an Incremental Graph Convolutional Network (IGCN) to detect and warn anomalous ship behaviors by jointly modeling AIS, video imagery, LiDAR point clouds, and water level signals. We first extract modality-specific features and enforce temporal–spatial consistency via timestamp and geo-referencing alignment, then construct an evolving graph in which nodes represent multimodal features and edges encode temporal dependency and semantic similarity. MF-GCN integrates a Semantic Clustering-based GCN (S-GCN) to inject historical semantic context and an Attentive Fusion-based GCN (A-GCN) to learn dynamic cross-modal correlations using multi-head attention. Experiments on our constructed real-world datasets demonstrate that MF-GCN achieves accuracies of 93.8%, 93.8%, and 93.3% with F1-scores of 93.6%, 93.6%, and 93.3% for ship deviation warning, bridge-crossing warning, and inter-ship collision warning, respectively, consistently outperforming representative baselines. These results verify the effectiveness of the proposed method for robust multimodal anomaly detection and early warning in inland-waterway scenarios.
(This article belongs to the Special Issue Emerging Computational Methods in Intelligent Marine Vehicles)
