Search Results (1,891)

Search Parameters:
Keywords = robust iteration

29 pages, 5944 KB  
Article
Data-Driven Process FMEA for Flexible Manufacturing Systems: Framework and Industrial Case Study
by Dobri Komarski, Velizar Vassilev, Stiliyan Nikolov, Reneta Dimitrova and Slav Dimitrov
Appl. Sci. 2026, 16(8), 3760; https://doi.org/10.3390/app16083760 (registering DOI) - 11 Apr 2026
Abstract
Flexible automated assembly lines (FAALs) in Industry 4.0 require robust quality management that integrates operational data with systematic risk analysis. However, Process Failure Mode and Effects Analysis (PFMEA) documents are often developed during the design phase and not systematically updated with actual production data, leading to a gap between formal risk assessment and operational reality. This study addresses this gap by developing and validating an integrated data-driven framework that combines classical quality tools (process flow charts, check sheets, cause-and-effect diagrams, and Pareto analysis) with data-driven PFMEA, creating traceable links from operational logs to risk ratings. While individual quality tools are well-established, the core contribution of this work is a structured data transformation pipeline that creates traceable, auditable linkages from raw operational event logs to calibrated PFMEA ratings with quantified uncertainty—a combination not previously demonstrated for flexible assembly systems. The framework was applied to FMS-200, a modular FAAL for bearing units, consisting of eight stations and a common transfer system. Analysis of 186 failure events across 2743 assembly cycles, including 18 product configurations, identified 40 distinct failure modes with risk priority number (RPN) values ranging from 60 to 378, revealing that approximately 90% of the aggregated risk is associated with pneumatic systems. Monte Carlo uncertainty analysis (10,000 iterations) demonstrated robust rank stability, with the top five failure modes maintaining their relative ordering in over 90% of simulations. The framework provides production and quality managers with a systematic methodology to maintain PFMEA relevance through continuous data integration, enabling evidence-based prioritization of improvement actions. Full article
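A minimal sketch of the Monte Carlo rank-stability check described above, assuming invented severity/occurrence/detection ratings on the usual 1-10 FMEA scales and a simple plus-or-minus-one rating perturbation; the paper's calibrated ratings and uncertainty model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical S/O/D ratings (1-10 scale) for five failure modes; placeholder
# values, not the paper's calibrated PFMEA ratings.
ratings = np.array([
    [9, 7, 6],   # FM1
    [8, 7, 5],   # FM2
    [7, 6, 5],   # FM3
    [6, 5, 4],   # FM4
    [5, 4, 3],   # FM5
])

n_iter = 10_000
baseline_rank = np.argsort(-np.prod(ratings, axis=1))  # descending RPN order

stable = 0
for _ in range(n_iter):
    # Perturb each rating by -1/0/+1 to mimic rating uncertainty, clipped to 1..10.
    noisy = np.clip(ratings + rng.integers(-1, 2, ratings.shape), 1, 10)
    rpn = np.prod(noisy, axis=1)           # RPN = S * O * D
    if np.array_equal(np.argsort(-rpn), baseline_rank):
        stable += 1

print(f"Rank order preserved in {stable / n_iter:.1%} of simulations")
```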
16 pages, 1742 KB  
Article
Construction of a Nomogram Prediction Model for Mortality Risk Within 14 Days in Patients with Acute Myocardial Infarction and Ventricular Septal Rupture
by Jie Luo, Ben Huang, Hao-Yu Ruan, Du-Jiang Xie, Gao-Feng Wang, Lei Zhou, Ling Zhou and Shao-Liang Chen
J. Clin. Med. 2026, 15(8), 2919; https://doi.org/10.3390/jcm15082919 (registering DOI) - 11 Apr 2026
Abstract
Objective: This study aimed to develop a nomogram model for predicting 14-day in-hospital mortality in patients with acute myocardial infarction (AMI) and ventricular septal rupture (VSR). Methods: Clinical data of 86 hospitalized patients (44 survivors and 42 non-survivors within 14 days) were retrospectively collected at Nanjing First Hospital from 1 March 2015 to 7 August 2025. Lasso regression and multivariable logistic regression were used to identify predictors, which were subsequently incorporated into the nomogram development. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), calibration plots, decision curve analysis (DCA), and clinical impact curves, with internal validation via 1000 bootstrap resamples. Results: Lasso regression and multivariable logistic regression identified WBC count (OR = 1.31, 95% CI: 1.01–1.28, p = 0.040), D-dimer level (OR = 1.18, 95% CI: 1.01–1.38, p = 0.043), early revascularization (OR = 0.22, 95% CI: 0.06–0.88, p = 0.032), ventilatory support (OR = 3.48, 95% CI: 1.07–11.29, p = 0.038), and infection (OR = 3.97, 95% CI: 1.02–15.42, p = 0.047) as independent predictors of 14-day mortality. Based on these predictors, a nomogram was constructed. The model achieved an AUC of 0.866 (95% CI: 0.785–0.946), with a sensitivity of 0.857 (95% CI: 0.751–0.963) and a specificity of 0.818 (95% CI: 0.704–0.932). Calibration plots demonstrated acceptable agreement between predicted and observed probabilities; DCA and the clinical impact curve further confirmed its net benefit and clinical utility. Internal validation with 1000 bootstrap resampling iterations yielded an AUC of 0.864 (95% CI: 0.776–0.938), confirming reasonable discriminative performance. Conclusions: This study developed a clinically interpretable nomogram that provides a robust tool for estimating short-term (14-day) in-hospital mortality risk in patients with AMI-VSR. Full article
(This article belongs to the Special Issue Acute Myocardial Infarction: Diagnosis, Treatment, and Rehabilitation)
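The validation recipe (multivariable logistic model, apparent AUC, 1000-resample bootstrap) can be sketched with scikit-learn. Everything below is synthetic stand-in data, not the 86-patient cohort or its actual predictors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

# Synthetic stand-in for the cohort; the real predictors are WBC count,
# D-dimer, early revascularization, ventilatory support, and infection.
X, y = make_classification(n_samples=86, n_features=5, n_informative=5,
                           n_redundant=0, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap internal validation: refit on resamples, score on the original data.
aucs = []
for i in range(1000):
    Xb, yb = resample(X, y, random_state=i)
    m = LogisticRegression(max_iter=1000).fit(Xb, yb)
    aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"apparent AUC {apparent_auc:.3f}, bootstrap 95% CI [{lo:.3f}, {hi:.3f}]")
```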
16 pages, 4011 KB  
Article
Adaptive Multi-Order Penalty and Dual-Driven Weighting: aisPLS Algorithm for Raman Baseline Correction with Weak Peak Preservation
by Jiawei He, Yonglin Bai, Zishang Jv, Zhen Chen and Bo Wang
Molecules 2026, 31(8), 1243; https://doi.org/10.3390/molecules31081243 - 9 Apr 2026
Abstract
Baseline correction of Raman spectra is a critical step for achieving high-precision quantitative analysis. However, the presence of complex background noise, nonlinear baseline drift, and spectral peak distortion due to peak overlap in real spectral data severely limits the performance of conventional correction methods. To better preserve spectral details, this study proposes an improved penalized least squares method for Raman spectral baseline correction. Compared with common baseline correction approaches, the proposed method optimizes the iterative weight function through precise noise classification, significantly enhancing the algorithm’s flexibility. The traditional single smoothing parameter is extended into a smoothing vector, and a classification strategy consistent with that of the penalty parameter is adopted, enabling synchronous optimization and coordinated adjustment of both during iteration. Furthermore, based on the physical constraints of Raman spectra, the algorithm eliminates non-physical solutions that may arise in traditional iterative processes, ensuring the fidelity of the corrected spectra. Experimental results demonstrate that the proposed method exhibits strong robustness under various noise conditions and significantly improves correction accuracy. Full article
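The method builds on iteratively reweighted penalized least squares. As a point of reference, here is a minimal sketch of the classic asymmetric least squares (AsLS) baseline of Eilers and Boelens, a single-parameter relative of the adaptive, vector-smoothed aisPLS scheme the paper proposes; the parameters and the toy spectrum are illustrative.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric penalized least squares baseline (Eilers & Boelens).

    A simplified relative of aisPLS, which instead adapts the smoothing
    (as a vector) and the weights per spectral region during iteration.
    """
    n = len(y)
    D = sparse.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n))  # 2nd-order differences
    w = np.ones(n)
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + lam * D.T @ D).tocsc(), w * y)
        # Asymmetric reweighting: points above the fit (peaks) get small weight p.
        w = np.where(y > z, p, 1 - p)
    return z

# Toy spectrum: two Gaussian peaks on a curved baseline.
x = np.linspace(0, 1000, 1000)
spectrum = (5 * np.exp(-0.5 * ((x - 300) / 8) ** 2)
            + 3 * np.exp(-0.5 * ((x - 600) / 10) ** 2)
            + 1e-6 * (x - 500) ** 2 + 0.5)
corrected = spectrum - asls_baseline(spectrum)
```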
31 pages, 2759 KB  
Article
Uncertainty-Aware Groundwater Potential Mapping in Arid Basement Terrain Using AHP and Dirichlet-Based Monte Carlo Simulation: Evidence from the Sudanese Nubian Shield
by Mahmoud M. Kazem, Fadlelsaid A. Mohammed, Abazar M. A. Daoud and Tamás Buday
Water 2026, 18(8), 901; https://doi.org/10.3390/w18080901 - 9 Apr 2026
Abstract
Groundwater sustains human activity in arid crystalline terrains where surface water is scarce and hydrogeological data are limited. However, most groundwater potential mapping approaches depend on deterministic weighting methods without quantifying model variability. This study describes an uncertainty-aware Remote Sensing and Geographic Information Systems (RS–GIS) framework to delineate groundwater potential zones in the Wadi Arab Watershed, Northeastern Sudan. Nine thematic factors—geology and lithology, rainfall, slope, drainage density, lineament density, soil, land use/land cover, topographic wetness index, and height above nearest drainage—were integrated using the Analytical Hierarchy Process (AHP), with acceptable consistency (Consistency Ratio (CR) < 0.1). To address subjectivity in weights, a Dirichlet-based Monte Carlo simulation (500 iterations) was implemented to perturb AHP weights whilst preserving compositional constraints. The resulting Groundwater Potential Index (GWPI) classified 32.69% of the watershed as high to very high potential, primarily associated with alluvial deposits and fractured crystalline rocks. Model validation using Receiver Operating Characteristic (ROC) analysis yielded an Area Under the Curve (AUC) of 0.704, indicating acceptable predictive performance. Uncertainty assessment showed low spatial variability (mean standard deviation (SD) = 0.215) and stable exceedance probabilities, verifying the robustness of predicted high-potential zones. The proposed probabilistic AHP framework augments decision reliability and provides a transferable, cost-effective tool for groundwater planning in data-limited arid basement environments. Full article
(This article belongs to the Section Hydrogeology)
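The compositional perturbation step is compact enough to sketch: drawing weight vectors from a Dirichlet distribution centered on the AHP weights keeps each draw non-negative and summing to one. The nine weights and factor scores below are invented placeholders, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical AHP weights for the nine thematic layers (must sum to 1);
# the paper's pairwise-comparison-derived weights are not reproduced.
w0 = np.array([0.22, 0.16, 0.13, 0.11, 0.10, 0.09, 0.08, 0.06, 0.05])

# Dirichlet perturbation: alpha = kappa * w0 keeps the expected weights at w0
# while respecting the sum-to-one (compositional) constraint.
kappa = 200.0
W = rng.dirichlet(kappa * w0, size=500)      # 500 iterations, as in the paper

# Toy normalized factor scores for three pixels (rows); real inputs are rasters.
scores = rng.random((3, 9))
gwpi = scores @ W.T                          # index per pixel per simulation

print("mean GWPI per pixel:", gwpi.mean(axis=1).round(3))
print("per-pixel SD:", gwpi.std(axis=1).round(3))
```

Larger kappa concentrates the draws around the AHP weights; the per-pixel standard deviation then plays the role of the paper's uncertainty surface.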
26 pages, 2531 KB  
Article
Underwater Acoustic Source DOA Estimation for Non-Uniform Circular Arrays Based on EMD and PWLS Correction
by Chuang Han, Boyuan Zheng and Tao Shen
Symmetry 2026, 18(4), 627; https://doi.org/10.3390/sym18040627 - 9 Apr 2026
Abstract
Uniform circular arrays (UCAs) are widely used in underwater source localization due to their omnidirectional coverage. However, random sensor position errors caused by installation inaccuracies and environmental disturbances convert UCAs into non-uniform circular arrays (NCAs), severely degrading the performance of high-resolution direction of arrival (DOA) estimation algorithms. To address this issue, this paper proposes a robust DOA estimation method that integrates empirical mode decomposition (EMD) denoising with prior-weighted iterative least squares (PWLS) correction. The method first applies EMD to adaptively denoise received signals by selecting intrinsic mode functions based on a combined energy-correlation criterion. An initial DOA estimate is then obtained using the MUSIC algorithm. Finally, a PWLS correction algorithm leverages prior knowledge of deviated sensors to iteratively fit the circle center and gradually pull sensor positions toward the ideal circumference, using a differentiated relaxation mechanism to suppress outliers while preserving geometric features. Systematic Monte Carlo simulations compare five correction algorithms under multi-frequency and wideband signals. The results show that both multi-frequency and wideband signals reduce estimation errors to below 0.1°, with the proposed PWLS achieving the best accuracy under multi-frequency signals, while all algorithms approach zero error under wideband signals. The PWLS algorithm converges in about 10 iterations with high computational efficiency, providing a reliable solution for practical underwater NCA applications. Full article
(This article belongs to the Section Engineering and Materials)
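A toy stand-in for the correction idea, assuming a weighted algebraic (Kasa) circle fit and a fixed relaxation factor in place of the paper's prior-weighted scheme: fit the circle, down-weight sensors with large radial residuals, and relax positions toward the fitted circumference.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_circle(pts, w):
    # Weighted algebraic (Kasa) fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c.
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sw = np.sqrt(w)[:, None]
    sol, *_ = np.linalg.lstsq(A * sw, b * sw.ravel(), rcond=None)
    center = sol[:2]
    radius = np.sqrt(sol[2] + center @ center)   # c = r^2 - a^2 - b^2
    return center, radius

ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
sensors = np.column_stack([np.cos(ang), np.sin(ang)])   # ideal 8-element UCA
sensors[[2, 5]] += rng.normal(0, 0.08, (2, 2))          # two deviated sensors

w = np.ones(8)
for _ in range(10):                                     # ~10 iterations, as reported
    center, radius = fit_circle(sensors, w)
    resid = np.abs(np.linalg.norm(sensors - center, axis=1) - radius)
    w = 1.0 / (1.0 + (resid / (resid.mean() + 1e-12)) ** 2)   # suppress outliers in the fit
    radial = (sensors - center) / np.linalg.norm(sensors - center, axis=1, keepdims=True)
    sensors = sensors + 0.5 * (center + radius * radial - sensors)  # relax toward the circle

print("fitted radius after correction:", round(float(radius), 4))
```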
34 pages, 5480 KB  
Article
Metaheuristic Optimization of Treated Sewage Wastewater Quality Parameters with Natural Coagulants
by Joseph K. Bwapwa and Jean G. Mukuna
Water 2026, 18(8), 885; https://doi.org/10.3390/w18080885 - 8 Apr 2026
Abstract
This study presents a comprehensive multi-objective optimization of sewage wastewater treatment using bio-based coagulants, guided by the Grey Wolf Optimizer (GWO) and its multi-objective variant (MOGWO). Experimental coagulation data, employing Citrullus lanatus and Cucumis melo as natural coagulants, were modeled using multivariate regression techniques, yielding high coefficients of determination (R2 > 0.95) across key water quality parameters. The optimization process targeted maximal reductions in turbidity, total suspended solids (TSS), biochemical oxygen demand (BOD), and chemical oxygen demand (COD) through strategic manipulation of pH and coagulant dosage. The single-objective GWO achieved significant outcomes, including a 96.68% turbidity reduction at pH 5 and 50 mg/L dosage. The MOGWO algorithm identified Pareto-optimal solutions, such as a 94.2% turbidity reduction at pH 5 and 72 mg/L dosage, and a balanced BOD reduction of 52.7% at pH 7. The predictive models indicated that optimal treatment conditions could reduce chemical usage by up to 90% compared to conventional coagulants, resulting in potential cost savings of up to 30%. Moreover, the algorithms demonstrated rapid convergence, averaging 200 iterations, highlighting their computational efficiency and robustness. These findings illustrate that integrating bio-based coagulants with advanced optimization techniques can achieve high treatment efficiency while reducing chemical inputs, thus directly supporting environmental sustainability by minimizing sludge and secondary pollution. In this situation, the wastewater treatment plant will focus on resource-recovery systems with less or no waste at the end of the treatment process. This approach aligns with circular economy principles by promoting eco-friendly, cost-effective wastewater treatment solutions suitable for resource-limited settings. The study offers a forward-looking pathway for environmentally responsible wastewater management practices that significantly reduce chemical dependency and contribute to pollution mitigation efforts. Full article
(This article belongs to the Section Wastewater Treatment and Reuse)
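For reference, a minimal Grey Wolf Optimizer in the standard form (Mirjalili et al., 2014), applied to an invented quadratic surrogate of the turbidity response in (pH, dosage); the paper's fitted regression models and MOGWO extension are not reproduced.

```python
import numpy as np

def gwo(f, lb, ub, n_wolves=20, n_iter=200, seed=0):
    """Minimal Grey Wolf Optimizer for minimization over box bounds."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(n_iter):
        fit = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fit)[:3]]   # three best wolves lead
        a = 2 - 2 * t / n_iter                        # linearly decreasing coefficient
        for i in range(n_wolves):
            Xn = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                Xn += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(Xn / 3, lb, ub)            # average of the three pulls
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)], fit.min()

# Toy surrogate: a quadratic response in (pH, dose) standing in for the paper's
# regression models of turbidity removal; all coefficients are invented.
def residual_turbidity(x):
    ph, dose = x
    return (ph - 5.0) ** 2 + 0.001 * (dose - 55.0) ** 2

best, val = gwo(residual_turbidity, lb=np.array([4.0, 10.0]), ub=np.array([9.0, 100.0]))
print("optimal (pH, dose):", best.round(2))
```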
24 pages, 988 KB  
Article
An Improved Tracklet Generation Approach for Radar Maneuvering Target Tracking
by Songyao Dou, Ying Chen and Yaobing Lu
Electronics 2026, 15(7), 1538; https://doi.org/10.3390/electronics15071538 - 7 Apr 2026
Abstract
Aiming to improve radar multi-target tracking (MTT) accuracy and association performance in complex scenarios involving dense clutter, missed detections, and maneuvering targets, an improved tracklet generation approach based on the expectation–maximization (EM) framework is proposed in which data association variables and motion model variables are jointly modeled as latent variables. These variables are estimated through iterative updates based on the loopy belief propagation (LBP) algorithm and the interacting multiple model (IMM) filtering and smoothing algorithms to generate high-confidence tracklets. Then, a delayed decision-making strategy based on the multi-hypothesis approach is employed to associate these tracklets into complete target trajectories. The resulting algorithm is named IMM-TrackletMHT. The performance of the IMM-TrackletMHT algorithm is evaluated and compared with several baseline algorithms in simulated scenarios under different clutter rates and detection probabilities. The simulation results demonstrate that the proposed algorithm consistently outperforms the baseline methods in terms of tracking accuracy, exhibits strong robustness to variations in the operating environment, and achieves higher computational efficiency in multi-scan measurement processing, thereby demonstrating the effectiveness and superiority of the proposed tracklet generation approach for maneuvering MTT. Full article
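The IMM ingredient of this pipeline reduces, at its core, to a Markov model-probability update. A minimal numeric sketch with two models and invented transition and likelihood values follows; the paper embeds this inside LBP-based data association and smoothing.

```python
import numpy as np

# Two motion models (e.g., nearly-constant velocity vs. coordinated turn).
# Illustrative numbers only.
P_trans = np.array([[0.95, 0.05],      # Markov model-switching matrix
                    [0.10, 0.90]])
mu = np.array([0.5, 0.5])              # current model probabilities

# Per-model measurement likelihoods from the model-matched filters.
likelihood = np.array([0.8, 0.2])

# IMM step: predict model probabilities, then update with the likelihoods.
mu_pred = P_trans.T @ mu
mu = likelihood * mu_pred
mu /= mu.sum()
print("posterior model probabilities:", mu.round(3))
```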
35 pages, 30864 KB  
Article
A Robot Path Planning Method Based on a Key Point Encoding Genetic Algorithm
by Chuanyu Yang, Zhenxue He, Xiaojun Zhao, Yijin Wang and Xiaodan Zhang
Algorithms 2026, 19(4), 285; https://doi.org/10.3390/a19040285 - 7 Apr 2026
Abstract
Path planning is a key technology in robot navigation and has long attracted significant attention. However, in scenarios with high-density or unstructured obstacle distributions, path planning methods based on swarm intelligence optimization still face issues of low computational efficiency and poor path quality, limiting their performance in real-time applications. To address these challenges, this paper defines path key points and proposes a path planning method based on the Key-Points Encoding Genetic Algorithm (KEGA). First, an encoding scheme is designed to map key-point sequences into binary encodings, guiding the population to explore efficiently. Then, a new path generation module is integrated using target point direction, local environment, and historical path information to generate high-quality key-point sequences, thereby improving path quality. Additionally, by evaluating key-point sequences as a proxy for full path evaluation, only one precise path construction is required per iteration, significantly reducing computational overhead. Experiments were conducted on four simulated maps with diverse obstacle distribution characteristics and eight real-world street maps to validate the method’s robustness and generalizability. The results show that, compared to the existing state-of-the-art robot path planning methods, the proposed method achieves an average runtime savings of 75.40%, a path length reduction of 35.65% and a path smoothness improvement of 68%. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
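A plain binary-encoded GA skeleton (tournament selection, one-point crossover, bit-flip mutation) for orientation; in KEGA the bitstring would decode to a key-point sequence that the path constructor expands once per iteration, whereas the decoding and objective below are toys.

```python
import numpy as np

rng = np.random.default_rng(3)

def evolve(fitness, n_bits=32, pop=40, gens=100, p_mut=0.02):
    """Binary-encoded GA minimizing `fitness` over bitstrings."""
    P = rng.integers(0, 2, (pop, n_bits))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        # Tournament selection of parents.
        idx = rng.integers(0, pop, (pop, 2))
        parents = P[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # One-point crossover between consecutive parents.
        children = parents.copy()
        for i in range(0, pop - 1, 2):
            c = rng.integers(1, n_bits)
            children[i, c:], children[i + 1, c:] = parents[i + 1, c:], parents[i, c:]
        # Bit-flip mutation.
        P = children ^ (rng.random((pop, n_bits)) < p_mut)
    f = np.array([fitness(ind) for ind in P])
    return P[np.argmin(f)], f.min()

# Toy objective: prefer bitstrings whose decoded 4-bit "key points" are ascending,
# a crude stand-in for short, smooth key-point sequences.
def toy_fitness(bits):
    pts = bits.reshape(-1, 4) @ (1 << np.arange(4))
    return float(np.sum(np.maximum(0, pts[:-1] - pts[1:])))

best, cost = evolve(toy_fitness)
```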
29 pages, 6490 KB  
Article
A Closed-Form Inverse Kinematic Analytical Method for a Humanoid Seven-DOF Redundant Manipulator
by Guojun Zhao, Ben Ye, Yunlong Tian, Juntong Yun, Du Jiang and Bo Tao
Machines 2026, 14(4), 395; https://doi.org/10.3390/machines14040395 - 4 Apr 2026
Abstract
Humanoid manipulators with kinematic redundancy offer enhanced dexterity and adaptability to complex environments. Solving their inverse kinematics (IK) is fundamental to trajectory tracking, motion planning, and real-time control. Conventional Jacobian-based iterative methods are widely used, but they are often sensitive to the initial guess, computationally expensive, and less effective in handling strict constraints. Arm-angle-based analytical parameterization reduces redundancy resolution to a single parameter. However, joint limits may lead to multiple disconnected feasible arm-angle intervals. Many existing methods still depend on a numerical search or intelligent optimization to select the arm angle. This lowers computational efficiency and provides less explicit control over branch and configuration selection. To address these issues, this paper extends the arm-angle analytical IK framework. It introduces global configuration parameters to explicitly control the shoulder-elbow-wrist configuration. It also completes the analytical derivation of the rotational relationships of the first three joints in the reference plane. In addition, a feasibility determination and modeling scheme for the arm-angle domain is established, which covers disconnected feasible intervals. The IK problem is then reformulated as a one-dimensional optimization over the feasible domain. An efficient interval-based search is employed to determine the optimal arm angle. Experimental results demonstrate high accuracy and interference-free trajectory tracking. Comparative tests on randomly sampled target poses are also performed. The results show more concentrated error distributions, shorter average computation time, and higher success rates. These results confirm the advantages of the proposed method in accuracy, robustness, and real-time performance. Full article
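Once redundancy is reduced to the scalar arm angle, selection becomes a one-dimensional optimization over possibly disconnected feasible intervals. A sketch, assuming a made-up smooth cost and toy joint-limit intervals rather than the paper's interval-based search:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cost(psi):
    # Hypothetical smooth criterion (e.g., distance to a preferred posture).
    return np.cos(3 * psi) + 0.1 * psi ** 2

# Toy disconnected feasible arm-angle intervals induced by joint limits.
feasible = [(-2.8, -1.9), (-0.4, 0.9), (1.6, 2.7)]

# Optimize within each interval separately, then keep the global best.
best = None
for lo, hi in feasible:
    res = minimize_scalar(cost, bounds=(lo, hi), method="bounded")
    if best is None or res.fun < best.fun:
        best = res

print(f"optimal arm angle {best.x:.3f} rad, cost {best.fun:.3f}")
```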
16 pages, 6392 KB  
Article
An Engineered clMagR Tetramer with Enhanced Magnetism for Magnetic Manipulation
by Peng Zhang, Xiujuan Zhou, Shenting Zhang, Peilin Yang, Zhu-An Xu, Xin Zhang, Junfeng Wang, Tiantian Cai, Yuebin Zhang and Can Xie
Biomolecules 2026, 16(4), 537; https://doi.org/10.3390/biom16040537 - 3 Apr 2026
Abstract
Biological manipulation via physical stimuli such as light and magnetism has become a central goal in modern biotechnology. Among these modalities, magnetic fields offer unique advantages, including deep tissue penetration and untethered interventions in living systems. An ideal platform for such a magnetogenetic toolkit would be a genetically encodable protein with tunable magnetic features under physiological conditions. However, the development of such tools has been hindered by the lack of robust and stable protein scaffolds with strong intrinsic magnetic properties. Inspired by animal magnetoreception in nature, here, we rationally designed and systematically screened single-chain variants of the magnetoreceptor MagR. Through nine iterative rounds of design and experimental validation, we generated 25 constructs and ultimately identified a stable single-chain-dimer-based-tetramer, SDT-MagR, as the optimal magnetic molecular platform. This engineered protein exhibits exceptional structural stability and state-dependent magnetic behavior, showing ferrimagnetic-like characteristics in the solid state and paramagnetic behavior in solution. With enhanced magnetic susceptibility, purified SDT-MagR can be directly attracted by a magnet in vitro, establishing it as a promising new platform for future biomagnetic manipulation and magnetogenetics applications. Full article
(This article belongs to the Topic Metalloproteins and Metalloenzymes, 2nd Edition)
28 pages, 1551 KB  
Article
Opinion Quality Dynamic Management and Consensus Model with Quality Threshold for Group Decision Making
by Yanling Lu, Zhiying Wang and Yejun Xu
Systems 2026, 14(4), 393; https://doi.org/10.3390/systems14040393 - 3 Apr 2026
Abstract
In group decision making (GDM), experts from a variety of fields collaborate to select the best alternative. Due to external influences or a lack of sufficient knowledge, experts may sometimes offer low-quality opinions on alternatives. In existing GDM problems, the opinion quality and the consensus with a quality threshold have never been explored simultaneously. To fill this gap, this paper proposes a novel GDM framework integrating opinion quality dynamic management and an improved minimum cost consensus model (MCCM) with a quality threshold in GDM. Firstly, opinion quality dynamic evaluations and management mechanisms are designed to improve the opinions of experts to some extent. Afterwards, the weights of the experts are determined by combining their social reputation and opinion quality. Furthermore, the impact of opinion quality is considered in the consensus, and an improved MCCM with a quality threshold is proposed to promote the consensus. A case study on selecting AI enterprises for an investment is provided to verify the applicability of the proposed opinion-quality-based GDM. Ultimately, the quantitative results show that the proposed model achieves a consensus cost of 411, which is 67.5% lower than the benchmark method M2. The proposed GDM framework only requires two iterations and satisfies the predefined opinion quality threshold and consensus level. The optimal alternative remains stable under various parameter settings, verifying the robustness and superiority of the proposed model. Full article
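The basic minimum cost consensus model underneath the proposal is a small linear program: adjust opinions at per-expert cost so every adjusted opinion stays within epsilon of the weighted group opinion. A sketch with invented opinions, costs, and weights (the paper adds the quality threshold and dynamic weighting on top):

```python
import numpy as np
from scipy.optimize import linprog

o = np.array([0.30, 0.55, 0.70, 0.90])   # initial opinions (toy values)
c = np.array([1.0, 2.0, 1.5, 1.0])       # per-expert adjustment costs
w = np.array([0.25, 0.25, 0.25, 0.25])   # expert weights
eps = 0.10                               # consensus tolerance
n = len(o)

# Decision vector z = [o', t] with t_i >= |o'_i - o_i|; minimize c @ t.
cost = np.concatenate([np.zeros(n), c])
A, b = [], []
for i in range(n):
    e = np.zeros(2 * n)
    e[i], e[n + i] = 1, -1;  A.append(e.copy()); b.append(o[i])    # o'_i - t_i <= o_i
    e[i], e[n + i] = -1, -1; A.append(e.copy()); b.append(-o[i])   # -o'_i - t_i <= -o_i
    g = np.zeros(2 * n)
    g[:n] = -w; g[i] += 1;   A.append(g.copy()); b.append(eps)     # o'_i - w@o' <= eps
    A.append(-g); b.append(eps)                                    # w@o' - o'_i <= eps

res = linprog(cost, A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(0, 1)] * n + [(0, None)] * n)
print("adjusted opinions:", res.x[:n].round(3), "total cost:", round(res.fun, 3))
```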
25 pages, 4371 KB  
Article
GTS-SLAM: A Tightly-Coupled GICP and 3D Gaussian Splatting Framework for Robust Dense SLAM in Underground Mines
by Yi Liu, Changxin Li and Meng Jiang
Vehicles 2026, 8(4), 79; https://doi.org/10.3390/vehicles8040079 - 3 Apr 2026
Abstract
To address unstable localization and sparse mapping for autonomous vehicles operating in GPS-denied and low-visibility environments, this paper proposes GTS-SLAM, a tightly coupled dense visual SLAM framework integrating Generalized Iterative Closest Point (GICP) and 3D Gaussian Splatting (3DGS). The system is designed for intelligent driving platforms such as underground mining vehicles, inspection robots, and tunnel autonomous navigation systems. The front-end performs covariance-aware point-cloud registration using GICP to achieve robust pose estimation under low texture, dust interference, and dynamic disturbances. The back-end employs probabilistic dense mapping based on 3DGS, combined with scale regularization, scale alignment, and keyframe factor-graph optimization, enabling synchronized optimization of localization and mapping. A Compact-3DGS compression strategy further reduces memory usage while maintaining real-time performance. Experiments on public datasets and real underground-like scenarios demonstrate centimeter-level trajectory accuracy, high-quality dense reconstruction, and real-time rendering. The system provides reliable perception capability for vehicle autonomous navigation, obstacle avoidance, and path planning in confined and weak-light environments. Overall, the proposed framework offers a deployable solution for autonomous driving and mobile robots requiring accurate localization and dense environmental understanding in challenging conditions. Full article
(This article belongs to the Special Issue AI-Empowered Assisted and Autonomous Driving)
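For orientation, a covariance-free special case of the front-end: one point-to-point ICP step solved in closed form via SVD (Kabsch). GICP generalizes this by weighting correspondences with per-point covariances (plane-to-plane); the point clouds below are synthetic.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP alignment step via the Kabsch/SVD solution."""
    # Nearest-neighbour correspondences by brute force (toy scale only).
    d = np.linalg.norm(src[:, None] - dst[None], axis=2)
    matched = dst[d.argmin(axis=1)]
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(7)
dst = rng.random((100, 3))
theta = 0.1                                # small rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = (dst - 0.01) @ R_true.T              # slightly rotated/shifted copy

for _ in range(20):
    R, t = icp_step(src, dst)
    src = src @ R.T + t
print("mean residual:", np.linalg.norm(src - dst, axis=1).mean())
```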
21 pages, 3333 KB  
Article
A Methodological Framework for Runtime Ontology Evolution in Dynamic Environments
by Valeria Seidita, Lucrezia Mosca and Antonio Chella
Appl. Sci. 2026, 16(7), 3494; https://doi.org/10.3390/app16073494 - 3 Apr 2026
Abstract
Intelligent systems operating in real-world environments are often required to make decisions in contexts that are only partially known at design time. In such scenarios, the assumption of a static and fully specified knowledge base becomes unrealistic, limiting the system’s ability to adapt to novel situations. This challenge is particularly relevant for robotic systems, whose behavior cannot be entirely pre-programmed when operating in dynamic and evolving environments. This paper proposes a methodological and architectural approach for the runtime update of ontologies and knowledge bases, enabling intelligent systems to autonomously adapt their internal representation of the world during execution. The proposed approach enables the system to identify knowledge gaps by distinguishing between previously unknown concepts and known concepts enriched with newly observed instances, and to integrate such information into the ontology in a controlled and consistent manner. The approach is implemented as an end-to-end pipeline that combines visual perception, semantic interpretation through large language models, and a robust ontology update mechanism. Particular attention is devoted to ensuring formal consistency during runtime evolution, addressing challenges such as the generation of valid OWL constructs, the management of inverse properties, datatype normalization, and the prevention of semantic degradation over iterative updates. By enabling knowledge-driven adaptation at runtime, the proposed framework supports autonomous decision-making in environments that cannot be fully anticipated at design time. The approach was developed within the MUSIC4D and MHARA projects, which explore the use of intelligent systems in dynamic, partially structured contexts, focusing on knowledge-based adaptation. Full article
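A guarded update of this kind can be sketched with rdflib: a newly observed concept either extends the TBox as a new OWL class or, if already modelled, contributes a new instance. The namespace, class names, and guard are illustrative, not the MUSIC4D/MHARA ontologies or the paper's full consistency machinery.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

EX = Namespace("http://example.org/onto#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)
g.add((EX.Tool, RDF.type, OWL.Class))        # design-time knowledge

def integrate(concept, parent=EX.Tool):
    """Distinguish unknown concepts from new instances of known ones."""
    cls = EX[concept]
    if (cls, RDF.type, OWL.Class) in g:      # known concept: record an instance
        ind = EX[f"{concept}_obs1"]
        g.add((ind, RDF.type, cls))
    else:                                    # unknown concept: extend the TBox
        g.add((cls, RDF.type, OWL.Class))
        g.add((cls, RDFS.subClassOf, parent))
        g.add((cls, RDFS.label, Literal(concept)))

integrate("Wrench")   # previously unknown -> added as a subclass of Tool
integrate("Wrench")   # now known -> recorded as a new instance
```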
31 pages, 2050 KB  
Article
Capacity Price Pricing Method Considering Time-of-Use Load Characteristics
by Sirui Wang and Weiqing Sun
Energies 2026, 19(7), 1753; https://doi.org/10.3390/en19071753 - 3 Apr 2026
Abstract
The growing flexibility of load dispatching in modern smart grids has exposed critical limitations in conventional capacity pricing mechanisms, which calculate charges based solely on monthly maximum demand without distinguishing when peak demand occurs. This approach fails to reflect the temporal value of capacity and provides insufficient incentives for demand-side optimization. To address these challenges, this paper proposes a time-of-use (TOU) capacity pricing method that integrates user load characteristics to enable more equitable cost allocation and optimized electricity consumption patterns. The methodology employs K-means clustering analysis of user load profiles to partition pricing periods, accurately capturing differential capacity value across temporal intervals. We validate the clustering approach through the elbow method and silhouette analysis, confirming k = 3 as optimal and demonstrating K-means superiority over hierarchical and density-based alternatives. This data-driven approach ensures that period delineation reflects actual consumption patterns of commercial and industrial users. A capacity cost allocation model is established using the Shapley value method, incorporating maximum demand in each designated period while maintaining revenue neutrality for the grid operator. The 80% load simultaneity factor is empirically validated using 12 months of Shanghai industrial data (May 2023–April 2024). A Stackelberg game-based pricing model for TOU capacity tariffs is developed, incentivizing users to deploy energy storage systems and optimize charging strategies. We prove game convergence theoretically and demonstrate equilibrium achievement within 3–5 iterations across diverse initialization scenarios. Energy storage capacity is optimized by sector (3.5–6.5% of peak demand) rather than uniformly, and realistic battery self-discharge rates (0.006%/hour) are incorporated. Case study analysis using real operational data from 11 commercial and industrial sub-sectors in Shanghai demonstrates effectiveness. Extended to 12 months with seasonal analysis, results show the proposed strategy reduces the peak-to-valley difference ratio by 2.4% [95% CI: 1.9%, 2.9%], p < 0.001; increases the system load factor by 1.3% [95% CI: 0.9%, 1.7%], p < 0.001; and achieves reductions in users’ total capacity costs of 3.6% [95% CI: −4.2%, −3.0%], p < 0.001. Comparative analysis shows the proposed method significantly outperforms simple TOU (improvement +1.2 pp) and peak-responsibility pricing (improvement +0.6 pp). Monte Carlo robustness analysis (1000 scenarios) confirms performance stability under demand uncertainty. This research provides theoretical foundations and practical methodologies for capacity cost allocation, offering valuable insights for policymakers and utilities seeking to enhance demand-side response mechanisms and improve power resource allocation efficiency. Full article
(This article belongs to the Section A: Sustainable Energy)
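The period-partitioning step is straightforward to sketch with scikit-learn: cluster hourly load features with K-means and compare candidate k via inertia (elbow) and silhouette scores. The 24-hour profile below is synthetic, not the Shanghai data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Toy hourly load features: each row is an hour of day described by its
# normalized mean load and local trend (invented data).
hours = np.arange(24)
mean_load = 0.5 + 0.4 * np.exp(-0.5 * ((hours - 14) / 4) ** 2) + rng.normal(0, 0.02, 24)
X = np.column_stack([mean_load, np.gradient(mean_load)])

# Elbow (inertia) and silhouette checks over candidate k, as in the paper.
for k in range(2, 6):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(f"k={k}: inertia={km.inertia_:.4f}, "
          f"silhouette={silhouette_score(X, km.labels_):.3f}")

# With k=3 the clusters map to peak / shoulder / off-peak pricing periods.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(hours.tolist(), labels.tolist())))
```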
23 pages, 2936 KB  
Article
Lightweight Transient-Source Detection Method for Edge Computing
by Jiahao Zhang, Yutian Fu, Feng Dong and Lingfeng Huang
Universe 2026, 12(4), 101; https://doi.org/10.3390/universe12040101 - 1 Apr 2026
Abstract
Transient-source detection without relying on difference images still faces challenges in achieving high accuracy, especially under practical space-based astronomical survey conditions where the data volume is enormous, on-orbit transmission bandwidth is limited, and real-time response is required for rapid follow-up observations. To address these issues, this paper proposes a lightweight detection network that integrates multi-scale feature fusion with contextual feature extraction, enabling efficient real-time processing on resource-constrained edge devices. The proposed model enhances robustness to point-spread-function variations across observation conditions and to complex background environments, while simultaneously improving detection accuracy. To evaluate performance comprehensively, lightweight VGG and lightweight ResNet architectures and other baseline models—commonly used as baselines for transient-source detection—are adopted for comparison. Experimental results show that under the condition that the models have approximately the same number of parameters, the proposed network achieves the best accuracy, obtaining nearly 1% improvement compared with the best-performing baseline model. Based on this design, an ultra-lightweight version with only 7k parameters is further developed by incorporating a compact multi-scale module, improving accuracy by 1% over the version without the multi-scale structure. Moreover, through heterogeneous knowledge distillation and adaptive iterative training, the accuracy of the ultra-lightweight model is further increased from 93.3% to 94.0%. Finally, the model is deployed and validated on an AI hardware acceleration platform. The results demonstrate that the proposed method substantially improves inference throughput while maintaining high accuracy, providing a practical solution for real-time, low-latency, on-device transient-source detection under large data volume and limited transmission conditions. Specifically, the proposed models are trained offline on a high-performance GPU and subsequently deployed on the Fudan Microelectronics 7100 AI board to evaluate their real-world inference efficiency on resource-constrained edge devices. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Modern Astronomy)
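The distillation step can be illustrated with the standard soft-target loss (Hinton et al.): a temperature-softened KL term against the teacher blended with the hard-label cross-entropy. The temperature, blend weight, and toy batch below are assumptions, not the paper's heterogeneous-distillation setup.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    """Soft-target distillation: KL between temperature-softened teacher and
    student distributions, blended with the hard-label loss. The T*T factor
    keeps gradient magnitudes comparable across temperatures."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

# Toy batch: 8 samples, binary transient / non-transient classification.
student = torch.randn(8, 2, requires_grad=True)
teacher = torch.randn(8, 2)
y = torch.randint(0, 2, (8,))
loss = distillation_loss(student, teacher, y)
loss.backward()
```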