Search Results (545)

Search Parameters:
Keywords = geometric uncertainties

48 pages, 1973 KB  
Review
A Review on Reverse Engineering for Sustainable Metal Manufacturing: From 3D Scans to Simulation-Ready Models
by Elnaeem Abdalla, Simone Panfiglio, Mariasofia Parisi and Guido Di Bella
Appl. Sci. 2026, 16(3), 1229; https://doi.org/10.3390/app16031229 (registering DOI) - 25 Jan 2026
Abstract
Reverse engineering (RE) has been increasingly adopted in metal manufacturing to digitize legacy parts, connect “as-is” geometry to mechanical performance, and enable agile repair and remanufacturing. This review consolidates scan-to-simulation workflows that transform 3D measurement data (optical/laser scanning and X-ray computed tomography) into simulation-ready models for structural assessment and manufacturing decisions, with an explicit focus on sustainability. Key steps are reviewed, from acquisition planning and metrological error sources to point-cloud/mesh processing, CAD/feature reconstruction, and geometry preparation for finite-element analysis (watertightness, defeaturing, meshing strategies, and boundary condition transfer). Special attention is given to uncertainty quantification and the propagation of geometric deviations into stress, stiffness, and fatigue predictions, enabling robust accept/reject and repair/replace choices. Sustainability is addressed through a lightweight reporting framework covering material losses, energy use, rework, and lead time across the scan–model–simulate–manufacture chain, clarifying when digitalization reduces scrap and over-processing. Industrial use cases are discussed for high-value metal components (e.g., molds, turbine blades, and marine/energy parts) where scan-informed simulation supports faster and more reliable decision making. Open challenges are summarized, including benchmark datasets, standardized reporting, automation of feature recognition, and integration with repair process simulation (DED/WAAM) and life-cycle metrics. A checklist is proposed to improve reproducibility and comparability across RE studies. Full article
(This article belongs to the Section Mechanical Engineering)
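The review's central step of propagating geometric deviations into stress predictions can be illustrated with a minimal Monte Carlo sketch (not the authors' workflow): a hypothetical rectangular beam whose measured thickness scatter is pushed through the bending-stress formula. All dimensions and the load are invented for illustration.

```python
import random
import statistics

def bending_stress(moment, width, thickness):
    """Max bending stress in a rectangular section: sigma = 6M / (b t^2)."""
    return 6.0 * moment / (width * thickness ** 2)

def propagate_thickness_uncertainty(moment, width, t_nominal, t_sigma, n=20000, seed=0):
    """Monte Carlo propagation of a Gaussian thickness deviation into stress."""
    rng = random.Random(seed)
    samples = [bending_stress(moment, width, rng.gauss(t_nominal, t_sigma))
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

# 10 mm nominal thickness with 0.2 mm (1-sigma) scan-derived deviation:
mean_s, std_s = propagate_thickness_uncertainty(
    moment=100.0, width=0.05, t_nominal=0.01, t_sigma=0.0002)
```

The resulting stress spread (roughly twice the relative thickness scatter, since stress goes as 1/t²) is what feeds a robust accept/reject margin.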

38 pages, 9992 KB  
Article
Learning-Based Multi-Objective Optimization of Parametric Stadium-Type Tiered-Seating Configurations
by Metin Arel and Fikret Bademci
Mathematics 2026, 14(3), 410; https://doi.org/10.3390/math14030410 (registering DOI) - 24 Jan 2026
Abstract
Parametric tiered-seating design can be framed as a constrained multi-objective optimization problem in which a low-dimensional decision vector is evaluated by a deterministic operator with sequential feasibility rejection and visibility constraints. This study introduces an oracle-preserving, learning-assisted screening workflow, where a multi-output multilayer perceptron (MLP) is used only to prioritize candidates for evaluation. Here, multi-output denotes a single network trained to predict the full objective vector jointly. Candidates are sampled within bounded decision ranges and evaluated by an operator that propagates section-coupled geometric state and enforces hard clearance thresholds through a Vertical Sightline System (VSS), i.e., a deterministic row-wise sightline/clearance evaluator that enforces hard clearance thresholds. The oracle-evaluated set is reduced to its mixed-direction Pareto-efficient subset and filtered by feature-space proximity to a fixed validation reference using nearest-neighbor distances in standardized 11-dimensional features, yielding a robustness-oriented pool. A compact shortlist is derived via TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution; used here strictly as a post-Pareto decision-support ranking rule), and preference uncertainty is assessed by Monte Carlo weight sampling from a symmetric Dirichlet distribution. In an archived run under a fixed oracle budget, 1235 feasible designs are evaluated, producing 934 evaluated Pareto solutions; proximity filtering retains 187 robust candidates and TOPSIS reports a traceable top-30 shortlist. Stability is supported by concentrated top-k frequencies under weight perturbations and by audits under single-feature-drop ablations and tested rounding precisions. Overall, the workflow enables reproducible multi-objective screening and reporting for feasibility-dominated seating design. Full article
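The TOPSIS step named in the abstract is a standard post-Pareto ranking rule; a minimal sketch follows, with an invented 3×3 decision matrix, weights, and criterion directions (none of these values are the paper's).

```python
import numpy as np

def topsis_rank(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution (TOPSIS).

    matrix: (n_alternatives, n_criteria); benefit[j] True when larger is better.
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)   # 1 = at the ideal, 0 = at the anti-ideal
    return closeness, np.argsort(-closeness)

# Three candidate designs scored on (capacity↑, cost↓, sightline clearance↑):
scores, order = topsis_rank(
    [[500, 1.0, 0.12], [450, 0.8, 0.15], [520, 1.3, 0.10]],
    weights=[0.4, 0.3, 0.3],
    benefit=[True, False, True],
)
```

The Monte Carlo weight sampling the paper describes would simply re-run this ranking under Dirichlet-drawn `weights` and tally top-k frequencies.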

21 pages, 5688 KB  
Article
Investigation of the Mechanical Characteristics of Linear Rolling Guides Considering Multiple Errors
by Cheng Huang, Wentao Zhou, Wanli Liu, Yupeng Yi, Lei Shi, Rulin Xiong, Xiaobing Li and Xing Du
Lubricants 2026, 14(1), 46; https://doi.org/10.3390/lubricants14010046 - 22 Jan 2026
Abstract
Existing research on the linear rolling guide has predominantly focused on performance under ideal conditions or isolated error types, while systematic studies concerning multi-error coupling mechanisms and their impact on internal contact parameters remain limited. To address this, a comprehensive static model based on Hertz contact theory is proposed that simultaneously incorporates ball diameter, raceway radius, and raceway curvature center distance errors. This model is validated using finite element analysis (FEA) in ABAQUS, and the numerical results verify the feasibility and effectiveness of the proposed analytical model. Analysis of single, combined, and random errors indicates that geometric errors significantly influence vertical stiffness, load distribution, and critical load-carrying capacity. For example, as the ball diameter error varies from −2.5 to 2.5 μm, the vertical stiffness increases by a factor of 3.8, while a representative negative error combination reduces the critical load by nearly 40%. Additionally, random error analysis reveals that larger manufacturing tolerance ranges lead to increased fluctuation in ball contact forces, raising performance uncertainty. These findings establish the proposed model as a theoretical foundation for the precision design and load-bearing assessment of linear rolling guides under static conditions. Full article
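The strong stiffness sensitivity reported above stems from the nonlinear Hertzian load–deflection law F = k_c · δ^(3/2): a small change in interference δ (e.g., from a ball-diameter error) shifts both force and tangent stiffness. A minimal sketch with an illustrative contact constant, not the paper's calibrated model:

```python
def hertz_force(delta, k_c=1.0e10):
    """Hertz point-contact law for a ball: F = k_c * delta**1.5 (delta in m, F in N)."""
    return k_c * delta ** 1.5 if delta > 0 else 0.0

def contact_stiffness(delta, k_c=1.0e10):
    """Tangent stiffness dF/d(delta) = 1.5 * k_c * delta**0.5."""
    return 1.5 * k_c * delta ** 0.5 if delta > 0 else 0.0

# A +2.5 um ball diameter error adds interference, raising force and stiffness:
delta_nominal = 5e-6                       # 5 um nominal interference
delta_oversize = delta_nominal + 2.5e-6    # oversized ball
f0, f1 = hertz_force(delta_nominal), hertz_force(delta_oversize)
```

A negative error does the opposite, and can unload the ball entirely (δ ≤ 0), which is how combined negative errors erode the critical load capacity.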

16 pages, 3906 KB  
Article
S3PM: Entropy-Regularized Path Planning for Autonomous Mobile Robots in Dense 3D Point Clouds of Unstructured Environments
by Artem Sazonov, Oleksii Kuchkin, Irina Cherepanska and Arūnas Lipnickas
Sensors 2026, 26(2), 731; https://doi.org/10.3390/s26020731 (registering DOI) - 21 Jan 2026
Abstract
Autonomous navigation in cluttered and dynamic industrial environments remains a major challenge for mobile robots. Traditional occupancy-grid and geometric planning approaches often struggle in such unstructured settings due to partial observability, sensor noise, and the frequent presence of moving agents (machinery, vehicles, humans). These limitations seriously undermine long-term reliability and safety compliance—both essential for Industry 4.0 applications. This paper introduces S3PM, a lightweight entropy-regularized framework for simultaneous mapping and path planning that operates directly on dense 3D point clouds. Its key innovation is a dynamics-aware entropy field that fuses per-voxel occupancy probabilities with motion cues derived from residual optical flow. Each voxel is assigned a risk-weighted entropy score that accounts for both geometric uncertainty and predicted object dynamics. This representation enables (i) robust differentiation between reliable free space and ambiguous/hazardous regions, (ii) proactive collision avoidance, and (iii) real-time trajectory replanning. The resulting multi-objective cost function effectively balances path length, smoothness, safety margins, and expected information gain, while maintaining high computational efficiency through voxel hashing and incremental distance transforms. Extensive experiments in both real-world and simulated settings, conducted on a Raspberry Pi 5 (with and without the Hailo-8 NPU), show that S3PM achieves 18–27% higher IoU in static/dynamic segmentation, 0.94–0.97 AUC in motion detection, and 30–45% fewer collisions compared to OctoMap + RRT* and standard probabilistic baselines. The full pipeline runs at 12–15 Hz on the bare Pi 5 and 25–30 Hz with NPU acceleration, making S3PM highly suitable for deployment on resource-constrained embedded platforms. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing—2nd Edition)
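The per-voxel risk-weighted entropy described above can be sketched as binary occupancy entropy scaled by a motion cue; the scaling rule and the gain `alpha` below are assumptions for illustration, not S3PM's exact formula.

```python
import math

def occupancy_entropy(p_occ):
    """Shannon entropy (bits) of a voxel's binary occupancy probability."""
    if p_occ <= 0.0 or p_occ >= 1.0:
        return 0.0
    return -(p_occ * math.log2(p_occ) + (1 - p_occ) * math.log2(1 - p_occ))

def risk_weighted_entropy(p_occ, motion_score, alpha=2.0):
    """Fuse geometric uncertainty with predicted dynamics: an ambiguous voxel
    with a strong motion cue scores highest; confidently-mapped static free
    space scores near zero."""
    return occupancy_entropy(p_occ) * (1.0 + alpha * motion_score)

free_static = risk_weighted_entropy(0.05, 0.0)       # reliable free space
ambiguous_moving = risk_weighted_entropy(0.5, 0.8)   # hazardous region
```

A planner can then fold this score into its multi-objective cost so trajectories detour around high-entropy (ambiguous or dynamic) voxels.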

35 pages, 4364 KB  
Article
Pedestrian Traffic Stress Levels (PTSL) in School Zones: A Pedestrian Safety Assessment for Sustainable School Environments—Evidence from the Caferağa Case Study
by Yunus Emre Yılmaz and Mustafa Gürsoy
Sustainability 2026, 18(2), 1042; https://doi.org/10.3390/su18021042 - 20 Jan 2026
Abstract
Pedestrian safety in school zones is shaped by traffic conditions and street design characteristics, whose combined effects involve uncertainty and gradual transitions rather than sharp thresholds. This study presents an integrated assessment framework based on the analytic hierarchy process (AHP) and fuzzy logic to evaluate pedestrian traffic stress level (PTSL) at the street-segment scale in school environments. AHP is used to derive input-variable weights from expert judgments, while a Mamdani-type fuzzy inference system models the relationships between traffic and geometric variables and pedestrian stress. The model incorporates vehicle density, pedestrian density, lane width, sidewalk width, buffer zone, and estimated traffic flow speed as input variables, represented using triangular membership functions. Genetic Algorithm (GA) optimization is applied to calibrate membership-function parameters, improving numerical consistency without altering the linguistic structure of the model. A comprehensive rule base is implemented in MATLAB (R2024b) to generate a continuous traffic stress score ranging from 0 to 10. The framework is applied to street segments surrounding major schools in the study area, enabling comparison of spatial variations in pedestrian stress. The results demonstrate how combinations of traffic intensity and street geometry influence stress levels, supporting data-driven pedestrian safety interventions for sustainable school environments and low-stress urban mobility. Full article
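The triangular membership functions the model uses for its inputs can be sketched as follows; the "sidewalk width" fuzzy sets and their breakpoints are invented for illustration (the paper's GA-calibrated parameters are not given in the abstract).

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Illustrative 'sidewalk width' fuzzy sets (metres); breakpoints are assumptions:
def narrow(w):   return tri_mf(w, 0.0, 0.5, 1.5)
def adequate(w): return tri_mf(w, 1.0, 2.0, 3.0)
def wide(w):     return tri_mf(w, 2.5, 4.0, 6.0)

# A 1.2 m sidewalk is partly 'narrow' and partly 'adequate' -- the gradual
# transition the fuzzy approach is chosen for:
membership = (narrow(1.2), adequate(1.2), wide(1.2))
```

GA calibration then adjusts the (a, b, c) breakpoints numerically while the linguistic labels and rule base stay fixed.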

20 pages, 8055 KB  
Article
Research on an Underwater Visual Enhancement Method Based on Adaptive Parameter Optimization in a Multi-Operator Framework
by Zhiyong Yang, Shengze Yang, Yuxuan Fu and Hao Jiang
Sensors 2026, 26(2), 668; https://doi.org/10.3390/s26020668 - 19 Jan 2026
Abstract
Underwater images often suffer from luminance attenuation, structural degradation, and color distortion due to light absorption and scattering in water. The variations in illumination and color distribution across different water bodies further increase the uncertainty of these degradations, making traditional enhancement methods that rely on fixed parameters, such as underwater dark channel prior (UDCP) and histogram equalization (HE), unstable in such scenarios. To address these challenges, this paper proposes a multi-operator underwater image enhancement framework with adaptive parameter optimization. To achieve luminance compensation, structural detail enhancement, and color restoration, a collaborative enhancement pipeline was constructed using contrast-limited adaptive histogram equalization (CLAHE) with highlight protection, texture-gated and threshold-constrained unsharp masking (USM), and mild saturation compensation. Building upon this pipeline, an adaptive multi-operator parameter optimization strategy was developed, where a unified scoring function jointly considers feature gains, geometric consistency of feature matches, image quality metrics, and latency constraints to dynamically adjust the CLAHE clip limit, USM gain, and Gaussian scale under varying water conditions. Subjective visual comparisons and quantitative experiments were conducted on several public underwater datasets. Compared with conventional enhancement methods, the proposed approach achieved superior structural clarity and natural color appearance on the EUVP and UIEB datasets, and obtained higher quality metrics on the RUIE dataset (Average Gradient (AG) = 0.5922, Underwater Image Quality Measure (UIQM) = 2.095). On the UVE38K dataset, the proposed adaptive optimization method improved the oriented FAST and rotated BRIEF (ORB) feature counts by 12.5%, inlier matches by 9.3%, and UIQM by 3.9% over the fixed-parameter baseline, while the adjacent-frame matching visualization and stability metrics such as inlier ratio further verified the geometric consistency and temporal stability of the enhanced features. Full article
(This article belongs to the Section Sensing and Imaging)

29 pages, 6574 KB  
Article
Modeling Landslide Dam Breach Due to Overtopping and Seepage: Development and Model Evaluation
by Tianlong Zhao, Xiong Hu, Changjing Fu, Gangyong Song, Liucheng Su and Yuanyang Chu
Sustainability 2026, 18(2), 915; https://doi.org/10.3390/su18020915 - 15 Jan 2026
Abstract
Landslide dams, typically composed of newly deposited, loose, and heterogeneous materials, are highly susceptible to failure induced by overtopping and seepage, particularly under extreme hydrological conditions. Accurate prediction of such breaching processes is essential for flood risk management and emergency response, yet existing models generally consider only a single failure mechanism. This study develops a mathematical model to simulate landslide dam breaching under the coupled action of overtopping and seepage erosion. The model integrates surface erosion and internal erosion processes within a unified framework and employs a stable time-stepping numerical scheme. Application to three real-world landslide dam cases demonstrates that the model successfully reproduces key breaching characteristics across overtopping-only, seepage-only, and coupled erosion scenarios. The simulated breach hydrographs, reservoir water levels, and breach geometries show good agreement with field observations, with peak outflow and breach timing predicted with errors generally within approximately 5%. Sensitivity analysis further indicates that the model is robust to geometric uncertainties, as variations in breach outcomes remain smaller than the imposed parameter perturbations. These results confirm that explicitly accounting for the coupled interaction between overtopping and seepage significantly improves the representation of complex breaching processes. The proposed model therefore provides a reliable computational tool for analyzing landslide dam failures and supports more accurate hazard assessment under multi-mechanism erosion conditions. Full article
(This article belongs to the Section Hazards and Sustainability)

24 pages, 11080 KB  
Article
Graph-Based and Multi-Stage Constraints for Hand–Object Reconstruction
by Wenrun Wang, Jianwu Dang, Yangping Wang and Hui Yu
Sensors 2026, 26(2), 535; https://doi.org/10.3390/s26020535 - 13 Jan 2026
Abstract
Reconstructing hand and object shapes from a single view during interaction remains challenging due to severe mutual occlusion and the need for high physical plausibility. To address this, we propose a novel framework for hand–object interaction reconstruction based on holistic, multi-stage collaborative optimization. Unlike methods that process hands and objects independently or apply constraints as late-stage post-processing, our model progressively enforces physical consistency and geometric accuracy throughout the entire reconstruction pipeline. Our network takes an RGB-D image as input. An adaptive feature fusion module first combines color and depth information to improve robustness against sensing uncertainties. We then introduce structural priors for 2D pose estimation and leverage texture cues to refine depth-based 3D pose initialization. Central to our approach is the iterative application of a dense mutual attention mechanism during sparse-to-dense mesh recovery, which dynamically captures interaction dependencies while refining geometry. Finally, we use a Signed Distance Function (SDF) representation explicitly designed for contact surfaces to prevent interpenetration and ensure physically plausible results. Through comprehensive experiments, our method demonstrates significant improvements on the challenging ObMan and DexYCB benchmarks, outperforming state-of-the-art techniques. Specifically, on the ObMan dataset, our approach achieves hand CDh and object CDo metrics of 0.077 cm² and 0.483 cm², respectively. Similarly, on the DexYCB dataset, it attains hand CDh and object CDo values of 0.251 cm² and 1.127 cm², respectively. Full article
(This article belongs to the Section Sensing and Imaging)

16 pages, 2843 KB  
Article
Analysis of a Fiber-Coupled RGB Color Sensor for Luminous Flux Measurement of LEDs
by László-Zsolt Turos and Géza Csernáth
Sensors 2026, 26(2), 486; https://doi.org/10.3390/s26020486 - 12 Jan 2026
Abstract
Accurate measurement of luminous flux from solid-state light sources typically requires spectroradiometric equipment or integrating spheres. This work investigates a compact alternative based on a fiber-coupled RGB photodiode system and develops the optical, spectral, and geometric foundations required to obtain traceable flux estimates from reduced-channel measurements. The system under study comprises an LED with known spectral power distribution (SPD), optical head, optical fiber, a protective sensor window, and a photodiode matrix type sensor. A complete end-to-end analysis of the optical path is presented, including geometric coupling efficiency, fiber transmission and angular redistribution, Fresnel losses in the sensor window, and the mosaic structure of the sensor. Additional effects such as fiber–sensor alignment, fiber-facet tilt, air gaps, and LED placement tolerances are quantified and incorporated into a formal uncertainty budget. Using the manufacturer-supplied SPD of the reference LED together with the measured R, G, and B channel responsivity functions of the sensor, a calibration-based mapping is established to reconstruct photopic luminous flux from the three-channel outputs. These results demonstrate that, with appropriate modeling and calibration of all optical stages, a fiber-coupled RGB photodiode mosaic can provide practical and scientifically meaningful luminous-flux estimation for white LEDs, offering a portable and cost-effective alternative to conventional photometric instrumentation in mid-accuracy applications. Further optimization of computation speed can enable fully integrated measurement systems in resource-constrained environments. Full article
(This article belongs to the Section Optical Sensors)
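The Fresnel losses in the protective sensor window follow from the normal-incidence power reflectance R = ((n1 - n2) / (n1 + n2))². A minimal sketch, assuming an uncoated n = 1.5 window in air and ignoring absorption and multiple internal reflections (the paper's full budget also covers coupling, fiber, and alignment terms):

```python
def fresnel_reflectance_normal(n1, n2):
    """Fresnel power reflectance at normal incidence: R = ((n1-n2)/(n1+n2))**2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def window_transmission(n_air=1.0, n_glass=1.5):
    """Single-pass transmission through a plane window: two air-glass
    interfaces, each losing a fraction R of the incident power."""
    r = fresnel_reflectance_normal(n_air, n_glass)
    return (1.0 - r) ** 2

t = window_transmission()  # about 4% lost per face for n = 1.5 glass
```

Each such deterministic loss factor enters the calibration mapping, while its tolerance (tilt, air gaps) enters the uncertainty budget.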

44 pages, 17655 KB  
Article
Adaptive Traversability Policy Optimization for an Unmanned Articulated Road Roller on Slippery, Geometrically Irregular Terrains
by Wei Qiang, Quanzhi Xu and Hui Xie
Machines 2026, 14(1), 79; https://doi.org/10.3390/machines14010079 - 8 Jan 2026
Abstract
To address the autonomous traversability challenge of an Unmanned Articulated Road Roller (UARR) operating on harsh terrains where low-adhesion slipperiness and geometric irregularities are coupled, and traction capacity is severely limited, this paper proposes a Terrain-Adaptive Maximum-Entropy Policy Optimization (TAMPO). A unified multi-physics simulation platform is constructed, integrating a high-fidelity vehicle dynamics model with a parameterized terrain environment. Considering the prevalence of geometric irregularities in construction sites, a parameterized mud-pit model is established—generalized from a representative case—as a canonical physical model and simulation carrier for this class of traversability problems. Based on this model, a family of training and test scenarios is generated to span a broad range of terrain shapes and adhesion conditions. On this foundation, the TAMPO algorithm is introduced to enhance vehicle traversability on complex terrains. The method comprises the following: (i) a Terrain Interaction-Critical Reward (TICR), which combines dense rewards representing task progress with sparse rewards that encourage terrain exploration, guiding the agent to both climb efficiently and actively seek high-adhesion favorable terrain; and (ii) a context-aware adaptive entropy-regularization mechanism that fuses, in real time, three feedback signals—terrain physical difficulty, task-execution efficacy, and model epistemic uncertainty—to dynamically regulate policy entropy and realize an intelligent, state-dependent exploration–exploitation trade-off in unstructured environments. The performance and generalization ability of TAMPO are evaluated on training, interpolation, and extrapolation sets, using PPO, SAC, and DDPG as baselines. On 90 highly challenging extrapolation scenarios, TAMPO achieves an average success rate (S.R.) of 60.00% and an Average Escape Time (A.E.T.) of 17.56 s, corresponding to improvements of up to 22.22% in S.R. and reductions of up to 5.73 s in A.E.T. over the baseline algorithms, demonstrating superior decision-making performance and robust generalization on coupled slippery and irregular terrains. Full article
(This article belongs to the Special Issue Modeling, Estimation, Control, and Decision for Intelligent Vehicles)

16 pages, 6033 KB  
Article
Automated Lunar Crater Detection with Edge-Based Feature Extraction and Robust Ellipse Refinement
by Ahmed Elaksher, Islam Omar and Fuad Ahmad
Aerospace 2026, 13(1), 62; https://doi.org/10.3390/aerospace13010062 - 8 Jan 2026
Abstract
Automated detection of impact craters is essential for planetary surface studies, yet it remains a challenging task due to variable morphology, degraded rims, complex geological settings, and inconsistent illumination conditions. This study presents a novel crater detection methodology designed for large-scale analysis of Lunar Reconnaissance Orbiter Wide-Angle Camera (WAC) imagery. The framework integrates several key components: automatic region-of-interest (ROI) selection to constrain the search space, Canny edge detection to enhance crater rims while suppressing background noise, and a modified Hough transform that efficiently localizes elliptical features by restricting votes to edge points validated through local fitting. Candidate ellipses are then refined through a two-stage adjustment, beginning with L1-norm fitting to suppress the influence of outliers and fragmented edges, followed by least-squares optimization to improve geometric accuracy and stability. The methodology was tested on four representative Wide-Angle Camera (WAC) sites selected to cover a range of crater sizes (between ~1 km and 50 km), shapes, and geological contexts. The results showed detection rates between 82% and 91% of manually identified craters, with an overall mean of 87%. Covariance analysis confirmed significant reductions in parameter uncertainties after refinement, with standard deviations for center coordinates, shape parameters, and orientation consistently decreasing from the L1 to the L2 stage. These findings highlight the effectiveness and computational efficiency of the proposed approach, providing a reliable tool for automated crater detection, lunar morphology studies, and future applications to other planetary datasets. Full article
(This article belongs to the Section Astronautics & Space Science)
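The refinement idea above (a robust fit followed by least-squares optimization) can be illustrated with a simpler circular analogue of the paper's ellipse stage: an algebraic Kasa least-squares circle fit on synthetic noisy rim samples. This is a sketch only; the paper fits full ellipses and runs an L1-norm stage before the least-squares one.

```python
import numpy as np

def fit_circle_lsq(points):
    """Algebraic least-squares circle fit (Kasa): solve for D, E, F in
    x**2 + y**2 + D*x + E*y + F = 0, then recover centre and radius."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Synthetic noisy samples of a rim centred at (3, -2) with radius 5:
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
pts = np.column_stack([3 + 5 * np.cos(theta), -2 + 5 * np.sin(theta)])
pts += rng.normal(0.0, 0.05, pts.shape)
cx, cy, r = fit_circle_lsq(pts)
```

The covariance analysis the paper reports corresponds to inspecting the parameter covariance of this least-squares stage before and after the robust pre-fit removes fragmented edges.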

18 pages, 7305 KB  
Article
SERail-SLAM: Semantic-Enhanced Railway LiDAR SLAM
by Weiwei Song, Shiqi Zheng, Xinye Dai, Xiao Wang, Yusheng Wang, Zihao Wang, Shujie Zhou, Wenlei Liu and Yidong Lou
Machines 2026, 14(1), 72; https://doi.org/10.3390/machines14010072 - 7 Jan 2026
Abstract
Reliable state estimation in railway environments presents significant challenges due to geometric degeneracy resulting from repetitive structural layouts and point cloud sparsity caused by high-speed motion. Conventional LiDAR-based SLAM systems frequently suffer from longitudinal drift and mapping artifacts when operating in such feature-scarce and dynamically complex scenarios. To address these limitations, this paper proposes SERail-SLAM, a robust semantic-enhanced multi-sensor fusion framework that tightly couples LiDAR odometry, inertial pre-integration, and GNSS constraints. Unlike traditional approaches that rely on rigid voxel grids or binary semantic masking, we introduce a Semantic-Enhanced Adaptive Voxel Map. By leveraging eigen-decomposition of local point distributions, this mapping strategy dynamically preserves fine-grained stable structures while compressing redundant planar surfaces, thereby enhancing spatial descriptiveness. Furthermore, to mitigate the impact of environmental noise and segmentation uncertainty, a confidence-aware filtering mechanism is developed. This method utilizes raw segmentation probabilities to adaptively weight input measurements, effectively distinguishing reliable landmarks from clutter. Finally, a category-weighted joint optimization scheme is implemented, where feature associations are constrained by semantic stability priors, ensuring globally consistent localization. Extensive experiments in real-world railway datasets demonstrate that the proposed system achieves superior accuracy and robustness compared to state-of-the-art geometric and semantic SLAM methods. Full article
(This article belongs to the Special Issue Dynamic Analysis and Condition Monitoring of High-Speed Trains)
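The eigen-decomposition of local point distributions that drives the adaptive voxel map can be sketched with the standard linearity/planarity/sphericity scores; the thresholds and exact scores SERail-SLAM uses are not specified in the abstract, so the values here are illustrative.

```python
import numpy as np

def local_shape_scores(points):
    """Eigen-decompose a local neighbourhood's 3x3 covariance. With sorted
    eigenvalues l1 >= l2 >= l3, high planarity flags a compressible planar
    surface, while high linearity/sphericity flags structures to preserve."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))  # ascending, then relabel
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity

# A noisy planar patch (e.g., a tunnel wall sample) scores high on planarity:
rng = np.random.default_rng(1)
patch = np.column_stack([rng.uniform(-1, 1, 500),
                         rng.uniform(-1, 1, 500),
                         rng.normal(0.0, 0.01, 500)])
lin, plan, sph = local_shape_scores(patch)
```

A map can then compress high-planarity voxels aggressively while keeping fine-grained non-planar structure at full resolution.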

14 pages, 1173 KB  
Technical Note
Three Methods for Combining Probability Distributions and an Alternative to Random-Effects Meta-Analysis
by Hening Huang
Metrology 2026, 6(1), 1; https://doi.org/10.3390/metrology6010001 - 4 Jan 2026
Abstract
Many fields or disciplines (e.g., uncertainty analysis in measurement science) require a combination of probability distributions. This technical note examines three methods for combining probability distributions: weighted linear pooling, geometric pooling, and the law of combination of distributions (LCD). Although these methods have been discussed in the literature, a systematic comparison of them appears insufficient. In particular, there is no discussion in the literature regarding the potential information loss that these methods may cause. This technical note aims to fill this gap. It provides insights into these three methods under the normality assumption. It shows that the weighted linear pooling method preserves all the variability (including heterogeneity) information in the original distributions; neither the geometric pooling method nor the LCD method preserves all the variability information, leading to information loss. We propose an index for measuring the information loss of a method with respect to the weighted linear pooling method. This technical note also shows that the weighted linear pooling method can be used as an alternative to the traditional random-effects meta-analysis. Three examples are presented: the combination of two normal distributions, the combination of three discrete distributions, and the determination of the Newtonian constant of gravitation. Full article
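The contrast the note draws can be made concrete for two normals. A weighted linear pool is a mixture whose variance retains both within-distribution variance and between-mean heterogeneity; a precision-weighted (inverse-variance, fixed-effect-style) combination, used below as a stand-in for the normality-case LCD, shrinks variance below every input, which is the information loss at issue. The formulas are standard; their identification with the paper's LCD is an assumption.

```python
def linear_pool(means, variances, weights):
    """Weighted linear pool of normals = mixture distribution. Its variance,
    by the law of total variance, keeps the between-mean heterogeneity."""
    m = sum(w * mu for w, mu in zip(weights, means))
    v = sum(w * (var + mu ** 2)
            for w, mu, var in zip(weights, means, variances)) - m ** 2
    return m, v

def precision_weighted(means, variances):
    """Inverse-variance combination: the combined variance is smaller than
    every input variance, discarding heterogeneity information."""
    precisions = [1.0 / v for v in variances]
    v = 1.0 / sum(precisions)
    m = v * sum(p * mu for p, mu in zip(precisions, means))
    return m, v

# Two unit-variance normals with heterogeneous means 0 and 2:
m_pool, v_pool = linear_pool([0.0, 2.0], [1.0, 1.0], [0.5, 0.5])
m_prec, v_prec = precision_weighted([0.0, 2.0], [1.0, 1.0])
```

Both methods agree on the combined mean (1.0), but the pool's variance (2.0) exceeds each input's, while the precision-weighted variance (0.5) falls below both.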
19 pages, 5002 KB  
Article
Deep Learning-Based Diffraction Identification and Uncertainty-Aware Adaptive Weighting for GNSS Positioning in Occluded Environments
by Chenhui Wang, Haoliang Shen, Yanyan Liu, Qingjia Meng and Chuang Qian
Remote Sens. 2026, 18(1), 158; https://doi.org/10.3390/rs18010158 - 3 Jan 2026
Abstract
In natural canyons and urban occluded environments, signal anomalies induced by satellite diffraction are a critical error source affecting the positioning accuracy of deformation monitoring. This paper proposes a deep learning-based method for identifying and mitigating diffraction signals. The method uses an LSTM network to mine the time-series characteristics of GNSS observation data. We systematically analyze how azimuth, elevation, SNR, and multi-feature combinations affect model recognition performance, showing that single features suffer from incomplete information or poor discrimination. Experimental results show that the multi-dimensional feature scheme of "SNR + Elevation + Azimuth" effectively characterizes both signal strength and spatial geometric information, achieving complementary feature advantages. The overall recognition accuracy of the proposed method reaches 84.2%, with 88.0% accuracy for the anomalous satellites that severely impact positioning precision. Furthermore, we propose an adaptive weighting method for diffraction mitigation based on uncertainty quantification. This method constructs a variance inflation model from the probability vector output by the LSTM Softmax layer and introduces information entropy to quantify prediction uncertainty, ensuring that the weighting model retains protective capability when the model fails or is uncertain. In processing a set of GNSS data collected in a highly occluded environment, the proposed method significantly outperforms traditional cut-off elevation and SNR mask strategies, improving the AFR to 99.9% and enhancing horizontal and vertical positioning accuracy by an average of 80.1% and 76.4%, respectively, thereby effectively boosting positioning accuracy and reliability in occluded environments.
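The entropy-based protection idea can be sketched in a few lines: the Softmax probability vector drives a variance-inflation factor that grows both with the predicted diffraction probability and with the (normalized) entropy of the prediction, so an uncertain classifier still down-weights the observation. The functional form, the class indexing, and the cap `k_max` below are our assumptions for illustration, not the paper's actual model.

```python
import math


def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)


def inflation_factor(p, k_max=10.0):
    """Illustrative variance-inflation factor from a Softmax output.

    Assumptions (not from the paper): index 1 is the "diffracted" class,
    entropy is normalized by log(num_classes) to lie in [0, 1], and the
    factor is capped at k_max. A confident "clean" prediction yields a
    factor near 1; a confident "diffracted" or a maximally uncertain
    prediction yields a factor near k_max, i.e. a strongly deflated weight.
    """
    p_diff = p[1]
    h_norm = entropy(p) / math.log(len(p))
    return 1.0 + (k_max - 1.0) * max(p_diff, h_norm)


# Observation weight = 1 / (inflated variance); sigma2 is a nominal variance.
def observation_weight(sigma2, p, k_max=10.0):
    return 1.0 / (sigma2 * inflation_factor(p, k_max))
```

For example, a confident clean prediction `[0.99, 0.01]` inflates the variance by less than a factor of two, while a maximally uncertain `[0.5, 0.5]` hits the full cap of 10, which is exactly the "protection when the model is uncertain" behavior described above.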
30 pages, 4065 KB  
Article
Capacity Optimization of Integrated Energy Systems Considering Carbon-Green Certificate Trading and Electricity Price Fluctuations
by Tiannan Ma, Gang Wu, Hao Luo, Bin Su, Yapeng Dai and Xin Zou
Processes 2026, 14(1), 142; https://doi.org/10.3390/pr14010142 - 31 Dec 2025
Abstract
To study how the carbon-green certificate trading mechanism and feed-in tariff fluctuations affect the low-carbon performance and economics of integrated energy system investment and operation, and to convert system carbon emissions into a low-carbon economic indicator, a two-layer capacity optimization and allocation model is established, with investment plus operation-and-maintenance cost and operating cost as the respective layer objectives. To handle source-load uncertainty, scenario reduction based on Monte Carlo simulation and the Wasserstein distance is used to obtain per-unit wind and photovoltaic output, and K-means clustering is used to obtain typical days of electric-heat-cold multi-energy load. Geometric Brownian motion, borrowed from finance, is used to simulate feed-in tariffs under different volatilities, and multidimensional analysis scenarios are constructed from combinations of carbon emission reduction policies and tariff volatilities. The model is solved using the non-dominated sorting genetic algorithm (NSGA-II) combined with mixed-integer linear programming (MILP). Case study results show that under the optimal scenario considering policy interaction and price volatility (δ = 1.0), the total annual operating cost is reduced by approximately 17.9% (from 2.80 million CNY to 2.30 million CNY) compared with the no-carbon-policy baseline. The levelized cost of the energy system reaches 0.2042 CNY/kWh, and carbon-green certificate trading synergies contribute about 70% of the operational cost reduction. The findings demonstrate that carbon reduction policies and electricity price volatility significantly affect system configuration and operational economy, providing a new perspective and decision-making basis for integrated energy system planning.
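Simulating a tariff path with geometric Brownian motion reduces to the exact log-normal update S(t+Δt) = S(t)·exp((μ − σ²/2)Δt + σ√Δt·Z) with Z ~ N(0, 1), which stays strictly positive by construction. The sketch below is our own illustration: the starting price, drift, volatility, and time grid are assumed example values, not parameters from the paper.

```python
import math
import random


def simulate_gbm(s0, mu, sigma, dt, n_steps, rng):
    """One geometric-Brownian-motion path via the exact log-normal update.

    s0:      initial price (e.g. a feed-in tariff in CNY/kWh)
    mu:      drift per unit time
    sigma:   volatility per sqrt(unit time)
    dt:      step size in the same time unit
    n_steps: number of increments (path has n_steps + 1 points)
    rng:     a random.Random instance, seeded for reproducibility
    """
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path


# Assumed example: a 0.60 CNY/kWh tariff, zero drift, daily steps for a year,
# simulated at two volatility levels to build contrasting scenarios.
rng = random.Random(42)
calm_path = simulate_gbm(s0=0.60, mu=0.0, sigma=0.05, dt=1 / 365, n_steps=365, rng=rng)
volatile_path = simulate_gbm(s0=0.60, mu=0.0, sigma=0.50, dt=1 / 365, n_steps=365, rng=rng)
```

Repeating this for several volatility levels (the paper's δ parameter) and crossing the resulting paths with the different carbon-policy combinations yields the kind of multidimensional scenario grid described above.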