Search Results (136)

Search Parameters:
Keywords = probabilistic embedding

24 pages, 1294 KB  
Article
Event-Driven Spatiotemporal Computing for Robust Flight Arrival Time Prediction: A Probabilistic Spiking Transformer Approach
by Quanquan Chen and Meilong Le
Aerospace 2026, 13(2), 203; https://doi.org/10.3390/aerospace13020203 - 22 Feb 2026
Viewed by 58
Abstract
Precise Estimated Time of Arrival (ETA) prediction in Terminal Maneuvering Areas (TMA) constitutes a prerequisite for efficient arrival sequencing and airspace capacity management. While data-driven approaches outperform kinematic models, conventional Recurrent Neural Networks (RNNs) exhibit limitations in modeling complex multi-aircraft spatial interactions and lack the capability to quantify predictive uncertainty. Conversely, Spiking Neural Networks (SNNs) enable energy-efficient event-driven computation, yet their applicability to continuous trajectory regression is hindered by “input starvation,” where normalized state vectors fail to induce sufficient neural firing rates. This study proposes a Probabilistic Spiking Transformer (PST) architecture to integrate neuromorphic sparsity with global attention mechanisms. An Adaptive Spiking Temporal Encoding mechanism incorporating learnable linear projections is introduced to resolve the regression-spiking incompatibility, facilitating the autonomous mapping of continuous trajectory dynamics into sparse spike trains without heuristic scaling. Concurrently, a Distance-Biased Multi-Aircraft Cross-Attention (MACA) module models air traffic conflicts by weighting spatial interactions according to physical proximity, thereby embedding separation constraints into the feature extraction process. Evaluation on large-scale real-world ADS-B datasets demonstrates that the PST yields a Mean Absolute Error (MAE) of 49.27 s, representing a 60% error reduction relative to standard LSTM baselines. Furthermore, the model generates well-calibrated probabilistic distributions (Prediction Interval Coverage Probability > 94%), offering quantifiable uncertainty metrics for risk-based decision support while ensuring real-time inference suitable for operational deployment. Full article
(This article belongs to the Section Air Traffic and Transportation)
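The distance-biased attention idea can be sketched generically: standard softmax attention whose logits are penalized by pairwise physical separation, so nearby aircraft dominate each other's context. This is an illustrative reconstruction, not the paper's MACA module; the `lam` scale and the toy distance matrix are invented for the sketch.

```python
import numpy as np

def distance_biased_attention(Q, K, V, dist, lam=1.0):
    """Scaled dot-product attention with an additive distance penalty:
    physically close aircraft receive larger attention weights."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) - lam * dist
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w

# Three aircraft with 4-dim state features; aircraft 0 is near 1, far from 2.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
dist = np.array([[0.0, 1.0, 9.0],
                 [1.0, 0.0, 8.0],
                 [9.0, 8.0, 0.0]])
out, w = distance_biased_attention(Q, K, V, dist)
```

With the large separation to aircraft 2, the bias term dominates the content logits, so aircraft 0 attends far more to aircraft 1 than to aircraft 2.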
Show Figures

Figure 1

17 pages, 2000 KB  
Article
Probabilistic Bird Trajectory Forecasting with Heavy-Tailed Uncertainty Modeling for Low-Altitude Airspace Monitoring
by Feiyang Song, Zhonghe Liu, Yuyang Zhao and Jingguo Zhu
Sensors 2026, 26(4), 1270; https://doi.org/10.3390/s26041270 - 15 Feb 2026
Viewed by 265
Abstract
Low-altitude airspace frequented by bird flocks is increasingly shared with unmanned aerial vehicles (UAVs), posing safety risks that necessitate accurate trajectory forecasting. However, existing vision-based methods often treat trajectory prediction and UAV detection as separate tasks, assume light-tailed Gaussian noise, and rely on heavy backbones. When applied to bird trajectory forecasting, these limitations hamper uncertainty calibration and embedded deployment in ground-based monocular surveillance. In this work, we propose a unified framework for low-altitude monitoring. Its core, Mini-BirdFormer, combines a lightweight Transformer encoder with a Student-t mixture density head to model heavy-tailed flight dynamics and produce calibrated uncertainty. Experiments on a real-world dataset show the model achieves strong long-horizon performance with only 1.05 million parameters, attaining a minADE of 0.785 m and reducing negative log-likelihood from 1.25 to −2.01 (lower is better) compared with a Gaussian Long Short-Term Memory (LSTM) baseline. Crucially, it enables low-latency inference on resource-constrained platforms at 616 FPS. Additionally, a system-level extension supports zero-shot UAV detection via open-vocabulary learning, attaining 92% recall without false alarms. Results demonstrate that combining heavy-tailed probabilistic modeling with a compact backbone provides a practical, deployable approach for monitoring shared airspace. Full article
(This article belongs to the Section Intelligent Sensors)
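Why a Student-t head helps with heavy-tailed flight dynamics can be seen by comparing negative log-likelihoods on an outlier: the Gaussian NLL grows quadratically with the residual, the Student-t NLL only logarithmically, so a single erratic maneuver does not dominate training. A minimal sketch using the standard closed-form densities (the `nu=3` choice is ours, not the paper's):

```python
from math import lgamma, log, pi

def gaussian_nll(x, mu, sigma):
    """Negative log density of N(mu, sigma^2) at x."""
    return 0.5 * log(2 * pi) + log(sigma) + 0.5 * ((x - mu) / sigma) ** 2

def student_t_nll(x, mu, sigma, nu):
    """Negative log density of a location-scale Student-t with nu dof."""
    z = (x - mu) / sigma
    return -(lgamma((nu + 1) / 2) - lgamma(nu / 2)
             - 0.5 * log(nu * pi) - log(sigma)
             - (nu + 1) / 2 * log(1 + z * z / nu))

# An observation 8 sigma away from the mean:
print(gaussian_nll(8.0, 0.0, 1.0))          # ≈ 32.9, explodes quadratically
print(student_t_nll(8.0, 0.0, 1.0, nu=3.0)) # ≈ 7.2, grows only logarithmically
```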

29 pages, 3196 KB  
Review
The Remote Sensing Geostatistical Paradigm: A Review of Key Technologies and Applications
by Junyu He
Remote Sens. 2026, 18(4), 600; https://doi.org/10.3390/rs18040600 - 14 Feb 2026
Viewed by 139
Abstract
Advancements in earth observation technologies are ushering in the big data era, yet this potential is compromised by intrinsic challenges: inherent uncertainty, spatiotemporal heterogeneity, multi-scale character, and pervasive data gaps. Traditional methods often fail to address these issues within a single, coherent system. The main contributions of this review are to systematically establish the Remote Sensing Geostatistical Paradigm (RSGP) as a comprehensive, unified framework. Powered by its core theory, Bayesian Maximum Entropy (BME), RSGP is a broadly designed epistemic framework that transcends a mere conceptual reorganization of established methods. It addresses the above challenges by highlighting two pivotal concepts within a spatiotemporal random field: (1) uncertainty quantification via probabilistic soft data, which redefines observations as probability density functions, representing a fundamental epistemological shift from deterministic scalars to probabilistic entities, and provides a universal interface for rigorous assimilation of heterogeneous remote sensing or in situ observations and synergy with other computational models, such as machine learning; and (2) spatiotemporal structure exploitation, which integrates the underlying structure embedded in remote sensing data of natural attributes, moving beyond mere optical properties to incorporate a broader range of available spatiotemporal information, for robust estimation and mapping purposes. Furthermore, the evolution of key technologies is illustrated by using real-world application cases, guiding how to implement RSGP in terms of different scenarios. Finally, the paradigm’s features and limitations are discussed. This synthesis provides the remote sensing community with a robust foundation for uncertainty-aware analysis and multi-source integration, bridging geostatistical logic with next-generation AI-driven Earth observation. Full article
(This article belongs to the Section Remote Sensing for Geospatial Science)
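The "probabilistic soft data" concept redefines an observation as a probability density rather than a scalar, which can then be combined with a site prior by ordinary Bayesian conditioning. A grid-based toy sketch (the triangular sensor pdf and all numbers are invented; BME itself is a richer framework than this):

```python
import numpy as np

x = np.linspace(-5, 15, 2001)                  # grid over the attribute value
prior = np.exp(-0.5 * ((x - 4.0) / 2.0) ** 2)  # site prior, N(4, 2^2), unnormalized

# Soft datum: the sensor reports "somewhere in [5, 9], most likely near 6",
# encoded as a triangular pdf instead of a single deterministic scalar.
soft = np.clip(np.minimum(x - 5.0, (9.0 - x) / 3.0), 0.0, None)

post = prior * soft             # condition the prior on the soft datum
post /= post.sum()              # normalize on the grid
mean = float((x * post).sum())  # posterior point estimate
```

The posterior mean lands between the prior mode (4) and the soft-datum mode (6), weighted by how sharply each distribution is peaked.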

14 pages, 15601 KB  
Article
Hardware-Efficient Stochastic Computing-Based Neural Networks with SNN-Isomorphic LIF Activation
by Jiho Kim, Kaeun Lim and Youngmin Kim
Electronics 2026, 15(4), 768; https://doi.org/10.3390/electronics15040768 - 11 Feb 2026
Viewed by 169
Abstract
Recent advances in artificial intelligence have made power efficiency a primary objective in system design. In this context, stochastic computing (SC), which processes probabilistic bitstreams using simple logic, and spiking neural networks (SNNs), a neuromorphic paradigm, have gained prominence as alternative approaches. This study proposes a Stochastic Computing Neural Network (SC-NN) framework that minimizes the intrinsic errors of stochastic computing and leverages the isomorphism between one-count operations on bitstreams and spike-rate computations in spiking neural networks, yielding improvements in accuracy and hardware efficiency. In contrast to earlier studies that utilized independent random number sequences of 10 bits or higher, our study employed a practically implementable 8-bit linear feedback shift register (LFSR)-based pseudo-random bitstream; using 4 taps and 255 seeds makes the generator realistic to implement in hardware. Despite the inherent accuracy ceiling of pseudo-random sequences, the proposed method achieves higher accuracy. Applied to an 8-bit SC-based neural network accelerator, the proposed design improves accuracy by 35% over a conventional FSM baseline, while reducing power and area by 43.8% and 17.2%, respectively, and decreasing delay by 5.5%. These improvements translate to a 2.3× enhancement in the Figure of Merit (FoM), which was further verified through physical layout and FPGA results. Overall, this work introduces a new paradigm that enables simultaneous gains in accuracy and efficiency for low-power AI by suppressing the error sources and embedding the structural similarity between SNNs and SC into the design. Full article
(This article belongs to the Special Issue Design of Low-Power Circuits and Systems)
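The core SC trick is that a single AND gate multiplies two values encoded as bit probabilities: if P(a_i = 1) = a and P(b_i = 1) = b with independent streams, then P(a_i AND b_i = 1) = a·b. A software sketch of unipolar encoding (NumPy's RNG stands in for the paper's LFSR hardware; stream length and values are arbitrary):

```python
import numpy as np

def to_bitstream(value, rnd):
    """Unipolar SC encoding: each bit is 1 with probability `value` in [0, 1]."""
    return (rnd < value).astype(np.uint8)

rng = np.random.default_rng(1)
n = 8192                                # bitstream length (longer = lower variance)
ra, rb = rng.random(n), rng.random(n)   # two independent random number sources

a, b = 0.75, 0.5
sa, sb = to_bitstream(a, ra), to_bitstream(b, rb)

# A single AND gate realizes multiplication of the encoded probabilities.
product = (sa & sb).mean()              # ~0.375 = 0.75 * 0.5
```

Correlated (non-independent) number sources break this identity, which is why the choice of LFSR taps and seeds matters so much in real SC hardware.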

17 pages, 608 KB  
Article
Physics-Informed Bayesian Inference for Virtual Testing and Prediction of Train Performance
by Kian Sepahvand, Christoph Schwarz, Oliver Urspruch and Frank Guenther
Machines 2026, 14(2), 211; https://doi.org/10.3390/machines14020211 - 11 Feb 2026
Viewed by 193
Abstract
This paper proposes a physics-informed Bayesian framework for virtual testing and predictive modeling of train performance, specifically addressing stopping-distance prediction. The approach unifies physical simulation models with data-driven statistical inference to achieve uncertainty-aware predictions under limited or noisy measurements. By embedding governing equations of motion into a hierarchical Bayesian structure, the method systematically accounts for both model-form and data uncertainty, allowing explicit decomposition into aleatoric and epistemic components. A Gaussian process surrogate is employed to efficiently emulate high-fidelity physics simulations while preserving key dynamic behaviors and parameter sensitivities. The Bayesian formulation enables probabilistic calibration and validation, providing predictive distributions and confidence bounds. As a representative application, the framework is applied to the virtual prediction of train stopping distances, demonstrating how the proposed methodology captures nonlinear braking dynamics and quantifies uncertainty in safety-relevant performance metrics directly compatible with statistical verification standards such as EN 16834. The results confirm that the physics-informed Bayesian approach enables accurate, interpretable, and standards-aligned virtual testing across a wide range of dynamical systems. Full article
(This article belongs to the Special Issue Artificial Intelligence in Rail Transportation)
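A Gaussian process surrogate of this kind can be sketched in a few lines: fit a GP to a handful of "simulation" runs, then read off a predictive mean plus a variance that splits into an epistemic part (GP posterior variance) and an aleatoric part (measurement noise). The toy stopping-distance function, kernel hyperparameters, and noise level below are all invented for illustration:

```python
import numpy as np

def rbf(X1, X2, ls=10.0, var=1e4):
    """Squared-exponential kernel on 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls ** 2)

def simulate(v):
    """Stand-in 'high-fidelity' model: stopping distance vs. initial speed."""
    return 0.05 * v ** 2 + 2.0 * v

Xtr = np.array([10.0, 20.0, 30.0, 40.0, 50.0])          # training speeds
ytr = simulate(Xtr) + np.random.default_rng(2).normal(0, 1.0, Xtr.size)

noise = 1.0                                              # aleatoric (measurement) noise
K = rbf(Xtr, Xtr) + noise ** 2 * np.eye(Xtr.size)
Xte = np.array([25.0])
ks = rbf(Xtr, Xte)

mean = ks.T @ np.linalg.solve(K, ytr)                    # surrogate prediction
epistemic = rbf(Xte, Xte) - ks.T @ np.linalg.solve(K, ks)
total_var = epistemic + noise ** 2                       # epistemic + aleatoric
```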

40 pages, 1957 KB  
Article
A Multiple-Objective Memetic Algorithm for the Energy-Efficient Scheduling of Distributed Assembly Flow Shops
by Ruiheng Sun, Hongbo Song, Yourong Chen, Xudong Zhang, Liyuan Liu, Jian Lin and Yulong Cui
Symmetry 2026, 18(2), 315; https://doi.org/10.3390/sym18020315 - 9 Feb 2026
Viewed by 166
Abstract
In this paper, a Multiple-Objective Memetic Algorithm (MOMA) is proposed to address the Energy-Efficient Distributed Assembly Permutation Flow-Shop Scheduling Problem (EEDAPFSP) by explicitly exploiting the structural and objective symmetries inherent in the scheduling process, with the dual objectives of minimizing the maximum completion time (makespan) and total energy consumption (TEC). The EEDAPFSP is a complex NP-hard optimization problem in modern sustainable manufacturing that balances production efficiency and environmental sustainability. During the global search phase, a symmetry-preserving dual-search framework is constructed, in which diverse and potential regions in the solution space are explored by symmetrically generating time-dominant product sub-sequences (TDPSs) and energy-dominant product sub-sequences (EDPSs) in the individuals of each iteration, enabling complementary exploration from time- and energy-oriented perspectives. This is accomplished through the incorporation of a variable-weight metric technique and a first-product-fixed strategy into an estimation of distribution algorithm-based hyper-heuristic (EDAHH), so as to maintain a balanced and symmetric probabilistic modeling of decision patterns with respect to the makespan and energy consumption. In the local search phase, two problem-specific neighborhood structures are proposed to refine the job sequences corresponding to the TDPS and EDPS in the superior sub-population, effectively reducing both the makespan and TEC. A box-level ε dominance technique based on the crowding distance is proposed for Pareto archive updating. Additionally, an energy-saving strategy is embedded throughout the algorithm, incorporating three mechanisms—job processing delay, machine shutdown and restart control, and speed regulation—to further optimize TEC during both the global and local search phases. Finally, extensive computational experiments are carried out, and the results demonstrate that the MOMA achieves significantly better performance in terms of the inverted generational distance (IGD) and the quality metric ρ compared with state-of-the-art algorithms. The resulting Pareto front of non-dominated solutions provides a comprehensive set of trade-offs between energy consumption and the makespan, offering decision makers flexible and efficient scheduling options. Full article
(This article belongs to the Special Issue Symmetry in Computing Algorithms and Applications)
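The Pareto-front machinery underlying any such bi-objective scheduler reduces to a dominance test between (makespan, TEC) pairs. A minimal generic sketch (the candidate schedules are made-up numbers, and real archives would add the paper's ε-dominance and crowding-distance rules on top):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized: makespan, TEC)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated (makespan, TEC) pairs."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (makespan, TEC) of five candidate schedules
cands = [(100, 90), (95, 100), (110, 80), (105, 95), (120, 120)]
front = pareto_front(cands)   # (105, 95) and (120, 120) are dominated
```

The surviving front hands the decision maker the actual time/energy trade-off curve rather than a single compromise solution.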

23 pages, 6708 KB  
Article
Feasibility Domain Construction and Characterization Method for Intelligent Underground Mining Equipment Integrating ORB-SLAM3 and Depth Vision
by Siya Sun, Xiaotong Han, Hongwei Ma, Haining Yuan, Sirui Mao, Chuanwei Wang, Kexiang Ma, Yifeng Guo and Hao Su
Sensors 2026, 26(3), 966; https://doi.org/10.3390/s26030966 - 2 Feb 2026
Viewed by 245
Abstract
To address the limited environmental perception capability and the difficulty of achieving consistent and efficient representation of the workspace feasible domain caused by high dust concentration, uneven illumination, and enclosed spaces in underground coal mines, this paper proposes a digital spatial construction and representation method for underground environments by integrating RGB-D depth vision with ORB-SLAM3. First, a ChArUco calibration board with embedded ArUco markers is adopted to perform high-precision calibration of the RGB-D camera, improving the reliability of geometric parameters under weak-texture and non-uniform lighting conditions. On this basis, a “dense–sparse cooperative” OAK-DenseMapper Pro module is further developed; the module improves point-cloud generation using a mathematical projection model, and combines enhanced stereo matching with multi-stage depth filtering to achieve high-quality dense point-cloud reconstruction from RGB-D observations. The dense point cloud is then converted into a probabilistic octree occupancy map, where voxel-wise incremental updates are performed for observed space while unknown regions are retained, enabling a memory-efficient and scalable 3D feasible-space representation. Experiments are conducted in multiple representative coal-mine tunnel scenarios; compared with the original ORB-SLAM3, the number of points in dense mapping increases by approximately 38% on average; in trajectory evaluation on the TUM dataset, the root mean square error, mean error, and median error of the absolute pose error are reduced by 7.7%, 7.1%, and 10%, respectively; after converting the dense point cloud to an octree, the map memory footprint is only about 0.5% of the original point cloud, with a single conversion time of approximately 0.75 s. The experimental results demonstrate that, while ensuring accuracy, the proposed method achieves real-time, efficient, and consistent representation of the 3D feasible domain in complex underground environments, providing a reliable digital spatial foundation for path planning, safe obstacle avoidance, and autonomous operation. Full article
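The voxel-wise incremental update in a probabilistic occupancy map is conventionally done in log-odds space, where Bayesian fusion of repeated observations becomes simple addition. A generic sketch of that standard scheme (the 0.7/0.4 inverse-sensor-model values are illustrative, not taken from the paper):

```python
import math

def logodds(p):
    """Probability -> log-odds."""
    return math.log(p / (1.0 - p))

def prob(l):
    """Log-odds -> probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Inverse-sensor-model increments: a 'hit' nudges the voxel toward occupied,
# a 'miss' (ray passed through) nudges it toward free.
L_HIT, L_MISS = logodds(0.7), logodds(0.4)

l = 0.0                         # voxel starts unknown: p = 0.5
for meas in ["hit", "hit", "miss", "hit"]:
    l += L_HIT if meas == "hit" else L_MISS
p = prob(l)                     # ≈ 0.89 after three hits and one miss
```

Because unobserved voxels simply keep their log-odds, unknown regions are retained for free, exactly the property the feasible-domain representation relies on.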

15 pages, 1158 KB  
Article
Application of Probabilistic Genotyping Software to Paternity Cases Involving Low-Template DNA
by Alessia Riem, Elena Chierto, Federica Bertolotto, Marco Parnigoni, Serena Aneli and Carlo Robino
Genes 2026, 17(2), 187; https://doi.org/10.3390/genes17020187 - 1 Feb 2026
Viewed by 349
Abstract
Background: Interpreting short tandem repeat (STR) profiles from low-template DNA (LT-DNA) requires consideration of the stochastic phenomena that can affect the reliability of genotypes. Although several probabilistic genotyping tools have been developed to model such uncertainties, most have only been used for direct comparisons between persons of interest and crime scene samples. Their application to kinship testing involving LT-DNA has received comparatively little attention. Methods: We evaluated the performance of two probabilistic genotyping software (PGS) packages, EuroForMix (EFM) and EFMrep, which support alternative hypotheses with relatedness, by comparing them with a standard paternity testing software (Familias) in 33 paternity cases involving LT-DNA samples categorised as ‘mildly’ (MD) or ‘highly’ (HD) degraded based on the quality of the STR profiles. The samples included formalin-fixed paraffin-embedded tissues, bone specimens, and stains collected from personal items. Pedigrees with (‘trio’) and without (‘duo’) maternal information were considered. Results: In MD and HD duos, the likelihood ratios (LRs) obtained with EFMrep were significantly higher compared to other software. In trios, Familias produced significantly higher LRs than PGS for MD samples, whereas the three programs performed comparably for HD samples. Notably, in HD trios, EFMrep was the software most likely to maximise LR values, which were above 10,000 in 60% of the cases, compared with 50% for EFM and 40% for Familias. Conclusions: These findings provide preliminary evidence of the potential and limitations of using PGS for kinship assessments involving LT-DNA specimens. Full article
(This article belongs to the Section Molecular Genetics and Genomics)

33 pages, 3882 KB  
Article
Hybrid Feature Selection and Interpretable Random Forest Modeling for Olympic Medal Forecasting: Integrating CFO Optimization and Uncertainty Analysis
by Xinran Chen, Xuming Yan and Tanran Zhang
Mathematics 2026, 14(3), 478; https://doi.org/10.3390/math14030478 - 29 Jan 2026
Viewed by 402
Abstract
This study develops a data-driven predictive framework integrating hybrid feature selection, interpretable machine learning, and uncertainty quantification to forecast Olympic medal performance among elite nations. Focusing on the top ten countries from Paris 2024, the analysis employs a three-stage feature selection procedure combining Spearman correlation screening, random forest embedded importance, and the Caterpillar Fungus Optimizer (CFO) to identify stable long-term predictors. A novel test variable, rank, capturing historical competitive strength, and a refined continuous host-effect indicator derived from gravity-type trade models are introduced. Two complementary modeling strategies—a two-way fixed-effects econometric model and a CFO-optimized random forest—are implemented and validated. SHAP, LIME, and partial dependence plots enhance model interpretability, revealing nonlinear mechanisms underlying medal outcomes. Kernel density estimation generates probabilistic interval forecasts for Los Angeles 2028. Results demonstrate that historical performance and event-specific characteristics dominate medal predictions, while macroeconomic factors (GDP, population) and conventional host status contribute marginally once related variables are controlled. Consistent variable rankings across models and close alignment between 2028 projections and 2024 outcomes validate the framework’s robustness and practical applicability for sports policy and resource allocation decisions. Full article
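The kernel-density interval forecast at the end of this pipeline can be sketched generically: sampling from a Gaussian KDE is equivalent to a smoothed bootstrap (resample the data, add bandwidth-scaled Gaussian jitter), after which the interval is just two percentiles. The medal counts, the 90% level, and the use of Silverman's bandwidth rule below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
medals = np.array([36.0, 46.0, 38.0, 39.0, 40.0, 45.0])  # hypothetical history

# Silverman's rule of thumb for the KDE bandwidth.
h = 1.06 * medals.std(ddof=1) * medals.size ** (-1 / 5)

# Sampling from the Gaussian KDE = smoothed bootstrap.
draws = rng.choice(medals, size=100_000) + rng.normal(0.0, h, 100_000)

lo, hi = np.percentile(draws, [5, 95])    # 90% probabilistic interval forecast
```

Unlike a parametric normal interval, this inherits any skew or multimodality present in the historical counts.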

21 pages, 1284 KB  
Article
Probabilistic Indoor 3D Object Detection from RGB-D via Gaussian Distribution Estimation
by Hyeong-Geun Kim
Mathematics 2026, 14(3), 421; https://doi.org/10.3390/math14030421 - 26 Jan 2026
Viewed by 240
Abstract
Conventional object detectors represent each object by a deterministic bounding box, regressing its center and size from RGB images. However, such discrete parameterization ignores the inherent uncertainty in object appearance and geometric projection, which can be more naturally modeled as a probabilistic density field. Recent works have introduced Gaussian-based formulations that treat objects as distributions rather than boxes, yet they remain limited to 2D images or require late fusion between image and depth modalities. In this paper, we propose a unified Gaussian-based framework for direct 3D object detection from RGB-D inputs. Our method is built upon a vision transformer backbone to effectively capture global context. Instead of separately embedding RGB and depth features or refining depth within region proposals, our method takes a full four-channel RGB-D tensor and predicts the mean and covariance of a 3D Gaussian distribution for each object in a single forward pass. We extend a pretrained vision transformer to accept four-channel inputs by augmenting the patch embedding layer while preserving ImageNet-learned representations. This formulation allows the detector to represent both object location and geometric uncertainty in 3D space. By optimizing divergence metrics such as the Kullback–Leibler or Bhattacharyya distances between predicted and target distributions, the network learns a physically consistent probabilistic representation of objects. Experimental results on the SUN RGB-D benchmark demonstrate that our approach achieves competitive performance compared to state-of-the-art point-cloud-based methods while offering uncertainty-aware and geometrically interpretable 3D detections. Full article
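The divergence loss between predicted and target Gaussians has a well-known closed form for multivariate normals, which makes it differentiable and cheap. A sketch with invented 3D box parameters (mean = box center, covariance = extent/uncertainty); the paper's actual parameterization may differ:

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Closed-form KL(N(mu0, S0) || N(mu1, S1)) for k-dim Gaussians."""
    k = mu0.size
    S1inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + diff @ S1inv @ diff - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Predicted vs. target 3D object distributions (toy values).
mu_p, S_p = np.array([1.0, 2.0, 0.5]), np.diag([0.3, 0.2, 0.1])
mu_t, S_t = np.array([1.1, 2.0, 0.6]), np.diag([0.3, 0.2, 0.1])
loss = kl_gaussian(mu_p, S_p, mu_t, S_t)   # ≈ 0.067
```

Note the asymmetry of KL in its arguments; the Bhattacharyya distance mentioned in the abstract is a symmetric alternative.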

25 pages, 681 KB  
Systematic Review
A Systematic Review of Topic Modeling Techniques for Electronic Health Records
by Iqra Mehmood, Zoya Zahra, Sarah Iqbal, Ayman Qahmash and Ijaz Hussain
Healthcare 2026, 14(2), 282; https://doi.org/10.3390/healthcare14020282 - 22 Jan 2026
Viewed by 404
Abstract
Background: Electronic Health Records (EHRs) are a rich source of clinical information used for patient monitoring, disease progression analysis, and treatment outcome assessment. However, their large scale, heterogeneity, and temporal characteristics make them difficult to analyze. Topic modeling has emerged as an effective method to extract latent structures, detect disease characteristics, and trace patient trajectories in EHRs. Recent neural and transformer-based approaches such as BERTopic have significantly improved coherence, scalability, and domain adaptability compared to earlier probabilistic models. Methods: This Systematic Literature Review (SLR) examines topic modeling and its variants applied to EHR data over the past decade. We follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to identify, screen, and select relevant studies. The reviewed techniques span traditional probabilistic models, neural embedding-based methods, and temporal extensions designed for pathway and sequence modeling in clinical data. Results: The synthesis covers trends in publication patterns, dataset usage, application domains, and methodological contributions. The reviewed literature demonstrates strengths across different modeling families, while also highlighting challenges related to scalability, interpretability, temporal complexity, and privacy when analyzing large-scale EHRs. Conclusions: Topic modeling continues to play a central role in understanding temporal patterns and latent structures in EHRs. This review also outlines future possibilities for integrating topic modeling with Agentic AI and large language models to enhance clinical decision-making. Overall, this SLR provides researchers and practitioners with a consolidated foundation on temporal topic modeling in EHRs and its potential to advance data-driven healthcare. Full article
(This article belongs to the Special Issue AI-Driven Healthcare Insights)

16 pages, 3906 KB  
Article
S3PM: Entropy-Regularized Path Planning for Autonomous Mobile Robots in Dense 3D Point Clouds of Unstructured Environments
by Artem Sazonov, Oleksii Kuchkin, Irina Cherepanska and Arūnas Lipnickas
Sensors 2026, 26(2), 731; https://doi.org/10.3390/s26020731 - 21 Jan 2026
Viewed by 275
Abstract
Autonomous navigation in cluttered and dynamic industrial environments remains a major challenge for mobile robots. Traditional occupancy-grid and geometric planning approaches often struggle in such unstructured settings due to partial observability, sensor noise, and the frequent presence of moving agents (machinery, vehicles, humans). These limitations seriously undermine long-term reliability and safety compliance—both essential for Industry 4.0 applications. This paper introduces S3PM, a lightweight entropy-regularized framework for simultaneous mapping and path planning that operates directly on dense 3D point clouds. Its key innovation is a dynamics-aware entropy field that fuses per-voxel occupancy probabilities with motion cues derived from residual optical flow. Each voxel is assigned a risk-weighted entropy score that accounts for both geometric uncertainty and predicted object dynamics. This representation enables (i) robust differentiation between reliable free space and ambiguous/hazardous regions, (ii) proactive collision avoidance, and (iii) real-time trajectory replanning. The resulting multi-objective cost function effectively balances path length, smoothness, safety margins, and expected information gain, while maintaining high computational efficiency through voxel hashing and incremental distance transforms. Extensive experiments in both real-world and simulated settings, conducted on a Raspberry Pi 5 (with and without the Hailo-8 NPU), show that S3PM achieves 18–27% higher IoU in static/dynamic segmentation, 0.94–0.97 AUC in motion detection, and 30–45% fewer collisions compared to OctoMap + RRT* and standard probabilistic baselines. The full pipeline runs at 12–15 Hz on the bare Pi 5 and 25–30 Hz with NPU acceleration, making S3PM highly suitable for deployment on resource-constrained embedded platforms. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing—2nd Edition)
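The per-voxel risk-weighted entropy score can be illustrated with the binary Shannon entropy of the occupancy probability, scaled by a motion cue: ambiguous voxels (p near 0.5) score high, confidently free or occupied voxels score near zero. The risk weights below are invented placeholders for the paper's optical-flow-derived dynamics term:

```python
import numpy as np

def voxel_entropy(p):
    """Binary Shannon entropy in bits: 0 for certain voxels,
    1 for completely ambiguous ones (p = 0.5)."""
    p = np.clip(p, 1e-9, 1 - 1e-9)   # guard log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

occ = np.array([0.02, 0.5, 0.9, 0.98])        # occupancy probabilities
risk_weight = np.array([1.0, 3.0, 1.0, 1.0])  # motion cue (illustrative)
score = risk_weight * voxel_entropy(occ)      # high = ambiguous/hazardous
```

A planner can then fold `score` into its cost function, steering trajectories away from ambiguous or dynamic regions rather than only from known obstacles.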

23 pages, 5500 KB  
Article
Low-Damage Seismic Design Approach for a Long-Span Cable-Stayed Bridge in a High Seismic Hazard Zone: A Case Study of the New Panama Canal Bridge
by Zhenghao Xiao, Shan Huang, Sheng Li, Minghua Li and Yao Hu
Buildings 2026, 16(2), 428; https://doi.org/10.3390/buildings16020428 - 20 Jan 2026
Viewed by 273
Abstract
Designing long-span cable-stayed bridges in high seismic hazard zones presents significant challenges due to their flexible structural systems, the influence of multi-support excitation, and the need to control large displacements while limiting seismic demands on critical components. These difficulties are further amplified in regions with complex geology and for bridges required to maintain high levels of post-earthquake serviceability. This study develops a low-damage seismic design approach for long-span cable-stayed bridges and demonstrates its application in the New Panama Canal Bridge. Probabilistic seismic hazard assessment and site response analyses are performed to generate spatially varying ground motions at the pylons and side piers. The pylons adopt a reinforced concrete configuration with embedded steel stiffeners for anchorage, forming a composite zone capable of efficiently transferring concentrated stay-cable forces. The lightweight main girder consists of a lattice-type steel framework connected to a high-strength reinforced concrete deck slab, providing both rigidity and structural efficiency. A coordinated girder–pylon restraint system—comprising vertical bearings, fuse-type restrainers, and viscous dampers—ensures controlled stiffness and effective energy dissipation. Nonlinear seismic analyses show that displacements of the girder remain well controlled under the Safety Evaluation Earthquake, and the dampers and bearings exhibit stable hysteretic behaviours. Cable tensions remain within 500–850 MPa, meeting minimal-damage performance criteria. Overall, the results demonstrate that low-damage seismic performance targets are achievable and that the proposed design approach enhances structural control and seismic resilience in long-span cable-stayed bridges. Full article
(This article belongs to the Section Building Structures)

36 pages, 3003 KB  
Article
A Modified Artificial Protozoa Optimizer for Robust Parameter Identification in Nonlinear Dynamic Systems
by Davut Izci, Serdar Ekinci, Gökhan Yüksek, Mostafa Rashdan, Burcu Bektaş Güneş, Muhammet İsmail Güngör and Mohammad Salman
Biomimetics 2026, 11(1), 65; https://doi.org/10.3390/biomimetics11010065 - 12 Jan 2026
Viewed by 397
Abstract
Accurate parameter identification in nonlinear and chaotic dynamic systems requires optimization algorithms that can reliably balance global exploration and local refinement in complex, multimodal search landscapes. To address this challenge, a modified artificial protozoa optimizer (mAPO) is developed in this study by embedding two complementary mechanisms into the original artificial protozoa optimizer: a probabilistic random learning strategy to enhance population diversity and global search capability, and a Nelder–Mead simplex-based local refinement stage to improve exploitation and fine-scale solution adjustment. The general optimization performance and scalability of the proposed framework are first evaluated using the CEC2017 benchmark suite. Statistical analyses conducted over shifted and rotated, hybrid, and composition functions demonstrate that mAPO achieves improved mean performance and reduced variability compared with the original APO, indicating enhanced robustness in high-dimensional and complex optimization problems. The effectiveness of mAPO is then examined in nonlinear system identification applications involving chaotic dynamics. Offline and online parameter identification experiments are performed on the Rössler chaotic system and a permanent magnet synchronous motor (PMSM), including scenarios with abrupt parameter variations. Comparative simulations against APO and several state-of-the-art optimizers show that mAPO consistently yields smaller objective function values, more accurate parameter estimates, and superior statistical stability. In the PMSM case, exact parameter reconstruction with zero error is achieved across all independent runs, while rapid and smooth convergence is observed under both static and time-varying conditions. Full article
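The two mechanisms the abstract names can be sketched in a few lines of Python. This is an illustrative stand-in, not the authors' implementation: the population update is heavily simplified, a one-step coordinate pattern search substitutes for the Nelder–Mead simplex stage, and `p_learn`, the search bounds, and the `sphere` objective are all assumptions.

```python
import random

def sphere(x):
    """Toy objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def mapo_sketch(f, dim=2, pop=20, iters=200, p_learn=0.3, seed=1):
    """Simplified population search in the spirit of mAPO (not the paper's code)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)
    for _ in range(iters):
        for i, x in enumerate(X):
            if rng.random() < p_learn:
                # probabilistic random learning: step toward a random peer
                peer = X[rng.randrange(pop)]
                cand = [xi + rng.random() * (pi - xi) for xi, pi in zip(x, peer)]
            else:
                # baseline drift toward the best-so-far solution
                cand = [xi + rng.random() * (bi - xi) for xi, bi in zip(x, best)]
            if f(cand) < f(x):  # greedy acceptance
                X[i] = cand
        cur = min(X, key=f)
        if f(cur) < f(best):
            best = list(cur)
        # local refinement: a one-step pattern search stands in for Nelder-Mead
        for d in range(dim):
            for s in (0.1, -0.1):
                trial = list(best)
                trial[d] += s
                if f(trial) < f(best):
                    best = trial
    return best, f(best)

best, val = mapo_sketch(sphere)
```

The split of roles mirrors the paper's design: the random-learning move keeps the population diverse (exploration), while the deterministic refinement of the incumbent sharpens the final estimate (exploitation).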
(This article belongs to the Section Biological Optimisation and Management)

37 pages, 1355 KB  
Review
Risk Assessment of Chemical Mixtures in Foods: A Comprehensive Methodological and Regulatory Review
by Rosana González Combarros, Mariano González-García, Gerardo David Blanco-Díaz, Kharla Segovia Bravo, José Luis Reino Moya and José Ignacio López-Sánchez
Foods 2026, 15(2), 244; https://doi.org/10.3390/foods15020244 - 9 Jan 2026
Viewed by 461
Abstract
Over the last 15 years, mixture risk assessment for food xenobiotics has evolved from conceptual discussions and simple screening tools, such as the Hazard Index (HI), towards operational, component-based and probabilistic frameworks embedded in major food-safety institutions. This review synthesizes methodological and regulatory advances in cumulative risk assessment for dietary “cocktails” of pesticides, contaminants and other xenobiotics, with a specific focus on food-relevant exposure scenarios. At the toxicological level, the field is now anchored in concentration/dose addition as the default model for similarly acting chemicals, supported by extensive experimental evidence that most environmental mixtures behave approximately dose-additively at low effect levels. Building on this paradigm, a portfolio of quantitative metrics has been developed to operationalize component-based mixture assessment: HI as a conservative screening anchor; Relative Potency Factors (RPF) and Toxic Equivalents (TEQ) to express doses within cumulative assessment groups; the Maximum Cumulative Ratio (MCR) to diagnose whether risk is dominated by one or several components; and the combined Margin of Exposure (MOET) as a point-of-departure-based integrator that avoids compounding uncertainty factors. Regulatory frameworks developed by EFSA, the U.S. EPA and FAO/WHO converge on tiered assessment schemes, biologically informed grouping of chemicals and dose addition as the default model for similarly acting substances, while differing in scope, data infrastructure and legal embedding. Implementation in food safety critically depends on robust exposure data streams. Total Diet Studies provide population-level, “as eaten” exposure estimates through harmonized food-list construction, home-style preparation and composite sampling, and are increasingly combined with conventional monitoring. In parallel, human biomonitoring quantifies internal exposure to diet-related xenobiotics such as PFAS, phthalates, bisphenols and mycotoxins, embedding mixture assessment within a dietary-exposome perspective. Across these developments, structured uncertainty analysis and decision-oriented communication have become indispensable. By integrating advances in toxicology, exposure science and regulatory practice, this review outlines a coherent, tiered and uncertainty-aware framework for assessing real-world dietary mixtures of xenobiotics, and identifies priorities for future work, including mechanistically and data-driven grouping strategies, expanded use of physiologically based pharmacokinetic modelling and refined mixture-sensitive indicators to support public-health decision-making. Full article
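The screening metrics the review names reduce to short, standard formulas: HQ_i = exposure_i / guidance value_i, HI = Σ HQ_i, MCR = HI / max HQ_i, and MOET = 1 / Σ (exposure_i / POD_i). A minimal sketch (the exposure, guidance, and point-of-departure values below are invented for illustration):

```python
def hazard_quotients(exposures, guidance_values):
    """HQ_i = exposure_i / health-based guidance value_i (same dose units)."""
    return [e / g for e, g in zip(exposures, guidance_values)]

def hazard_index(hqs):
    """HI: conservative screening sum of hazard quotients."""
    return sum(hqs)

def max_cumulative_ratio(hqs):
    """MCR = HI / max HQ: near 1 means one chemical dominates the mixture risk;
    values approaching the number of components mean risk is shared."""
    return hazard_index(hqs) / max(hqs)

def combined_moe(exposures, pods):
    """MOET = 1 / sum(exposure_i / POD_i): reciprocal-sum combined margin of exposure."""
    return 1.0 / sum(e / p for e, p in zip(exposures, pods))

# Invented example: three co-occurring residues
hqs = hazard_quotients([0.1, 0.2, 0.05], [1.0, 0.5, 0.5])
hi = hazard_index(hqs)           # 0.6: below the screening threshold of 1
mcr = max_cumulative_ratio(hqs)  # 1.5: risk is shared, not single-chemical driven
moet = combined_moe([0.1, 0.2, 0.05], [100.0, 50.0, 50.0])
```

This is the component-based, dose-additive logic in its simplest tier; the review's higher tiers replace these scalar inputs with probabilistic exposure distributions and refined potency groupings.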
(This article belongs to the Special Issue Research on Food Chemical Safety)
