Search Results (4,325)

Search Parameters:
Keywords = complex uncertainty

27 pages, 1592 KB  
Article
Information-Theoretic Reliability Analysis of Consecutive r-out-of-n:G Systems via Residual Extropy
by Anfal A. Alqefari, Ghadah Alomani, Faten Alrewely and Mohamed Kayid
Entropy 2025, 27(11), 1090; https://doi.org/10.3390/e27111090 - 22 Oct 2025
Abstract
This paper develops an information-theoretic reliability inference framework for consecutive r-out-of-n:G systems by employing the concept of residual extropy, a dual measure to entropy. Explicit analytical representations are established in tractable cases, while novel bounds are derived for more complex lifetime models, providing effective tools when closed-form expressions are unavailable. Preservation properties under classical stochastic orders and aging notions are examined, together with monotonicity and characterization results that offer deeper insights into system uncertainty. A conditional formulation, in which all components are assumed operational at a given time, is also investigated, yielding new theoretical findings. From an inferential perspective, we propose a maximum likelihood estimator of residual extropy under exponential lifetimes, supported by simulation studies and real-world reliability data. These contributions highlight residual extropy as a powerful information-theoretic tool for modeling, estimation, and decision-making in multicomponent reliability systems, thereby aligning with the objectives of statistical inference through entropy-like measures.
(This article belongs to the Special Issue Recent Progress in Uncertainty Measures)
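For exponential lifetimes, the residual extropy referred to in the abstract has a convenient closed form that can be checked numerically. The sketch below is not the authors' code; it uses the standard definition J(X;t) = -(1/(2S(t)^2)) ∫_t^∞ f(x)^2 dx, which for an Exp(λ) lifetime reduces to -λ/4, independent of t:

```python
import math

def residual_extropy_exp(lam: float, t: float, n: int = 200_000) -> float:
    """Numerically evaluate J(X;t) = -1/(2 S(t)^2) * integral_t^inf f(x)^2 dx
    for an exponential lifetime with rate lam (trapezoidal rule on a
    truncated domain)."""
    upper = t + 40.0 / lam          # f(x)^2 is negligible beyond this point
    h = (upper - t) / n
    xs = [t + i * h for i in range(n + 1)]
    f2 = [(lam * math.exp(-lam * x)) ** 2 for x in xs]
    integral = h * (sum(f2) - 0.5 * (f2[0] + f2[-1]))
    survival = math.exp(-lam * t)
    return -integral / (2.0 * survival ** 2)

# For Exp(lam) the closed form is -lam/4 at every t (here -0.5 for lam = 2).
val = residual_extropy_exp(lam=2.0, t=1.5)
```

The t-independence is the "memoryless" signature of the exponential model and is what makes the maximum likelihood estimation in the paper tractable.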
24 pages, 14995 KB  
Article
A Novel Method for Predicting Oil and Gas Resource Potential Based on Ensemble Learning BP-Neural Network: Application to Dongpu Depression, Bohai Bay Basin, China
by Zijie Yang, Dongxia Chen, Qiaochu Wang, Sha Li, Fuwei Wang, Shumin Chen, Wanrong Zhang, Dongsheng Yao, Yuchao Wang and Han Wang
Energies 2025, 18(21), 5562; https://doi.org/10.3390/en18215562 - 22 Oct 2025
Abstract
Assessing and forecasting hydrocarbon resource potential (HRP) is of great significance. However, due to the complexity and uncertainty of geological conditions during hydrocarbon accumulation, it is challenging to accurately establish HRP models. This study employs machine learning methods to construct an HRP assessment model. First, nine primary controlling factors were selected from the five key conditions for HRP: source rock, reservoir, trap, migration, and accumulation. Subsequently, three prediction models were developed based on the backpropagation (BP) neural network, BP-Bagging algorithm, and BP-AdaBoost algorithm, with hydrocarbon resource abundance as the output metric. These models were applied to the Dongpu Depression in the Bohai Bay Basin for performance evaluation and optimization. Finally, this study examined the importance of various variables in predicting HRP and analyzed model uncertainty. The results indicate that the BP-AdaBoost model outperforms the others. On the test dataset, the BP-AdaBoost model achieved an R2 value of 0.77, compared to 0.73 for the BP-Bagging model and only 0.64 for the standard BP model. Variable importance analysis revealed that trap area, sandstone thickness, sedimentary facies type, and distance to faults significantly contribute to HRP. Furthermore, model accuracy is influenced by multiple factors, including the selection and quantification of geological parameters, dataset size and distribution characteristics, and the choice of machine learning algorithm. In summary, machine learning provides a reliable method for assessing HRP, offering new insights for identifying high-quality exploration blocks and optimizing development strategies.
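The bagging idea behind the BP-Bagging model can be illustrated independently of the geological data. Below is a minimal stdlib-only sketch that substitutes a one-variable least-squares line for the BP neural network base learner; the data and function names are illustrative, not from the paper:

```python
import random
import statistics

random.seed(0)

def fit_line(pts):
    # ordinary least squares for y = a*x + b (stand-in for a BP network)
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    a = sxy / sxx
    return a, my - a * mx

def bagging_predict(train, x, n_models=25):
    # Bagging: fit each base learner on a bootstrap resample of the
    # training set, then average the individual predictions.
    preds = []
    for _ in range(n_models):
        boot = [random.choice(train) for _ in train]
        a, b = fit_line(boot)
        preds.append(a * x + b)
    return statistics.mean(preds)

# Noisy linear data standing in for "controlling factor -> resource abundance".
train = [(float(x), 2.0 * x + 1.0 + random.gauss(0.0, 0.5)) for x in range(30)]
pred = bagging_predict(train, x=10.0)   # true underlying value is 21.0
```

AdaBoost differs from bagging in that each resample is weighted toward the points the previous learners predicted worst, which is the mechanism the abstract credits for the BP-AdaBoost model's higher R2.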
24 pages, 829 KB  
Article
Robust and Non-Fragile Path Tracking Control for Autonomous Vehicles
by Ilhan Lee and Jaewon Nah
Actuators 2025, 14(11), 510; https://doi.org/10.3390/act14110510 - 22 Oct 2025
Abstract
Path tracking is a fundamental function for autonomous vehicles, but its performance often degrades under parameter variations and controller fragility—issues seldom addressed together in prior studies. This paper develops a robust non-fragile Linear Quadratic Regulator (LQR) using linear matrix inequality (LMI) optimization, explicitly considering uncertainties in vehicle speed, mass, and cornering stiffness as well as gain perturbations from implementation. A two-degrees-of-freedom bicycle model is employed for controller design, and a weighted least-squares allocation method integrates multiple actuators, including front steering, rear steering, four-wheel independent drive, and braking. A double lane-change maneuver in CarSim evaluates the proposed design. The robust and non-fragile LQR maintains lateral offset within 0.02 m and overshoot below 1% under ±20% parameter variation, offering improved stability margins compared with the baseline LQR. The results highlight context-dependent actuator effects and clarify the trade-off between control complexity, robustness, and real-world applicability.
(This article belongs to the Special Issue Feature Papers in Actuators for Surface Vehicles)
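The baseline LQR against which the robust non-fragile design is compared can be sketched for a simplified lateral-error model. The code below is a nominal discrete-time LQR obtained by iterating the Riccati recursion; it uses a double-integrator stand-in, not the paper's two-degrees-of-freedom bicycle model or its LMI formulation, and all weights are illustrative:

```python
def mm(A, B):
    # matrix product for list-of-lists matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tr(A):
    # transpose
    return [list(col) for col in zip(*A)]

dt = 0.01
A = [[1.0, dt], [0.0, 1.0]]   # states: lateral error, lateral error rate
B = [[0.0], [dt]]             # scalar steering-like input
Q = [[1.0, 0.0], [0.0, 0.1]]  # state penalty (illustrative)
R = 0.05                      # input penalty (illustrative)

# Solve the discrete algebraic Riccati equation by fixed-point iteration.
P = [row[:] for row in Q]
for _ in range(5000):
    BtPB = mm(mm(tr(B), P), B)[0][0] + R
    BtPA = mm(mm(tr(B), P), A)
    K = [[BtPA[0][j] / BtPB for j in range(2)]]          # optimal gain
    AtPA = mm(mm(tr(A), P), A)
    corr = mm(mm(mm(tr(A), P), B), K)
    P = [[Q[i][j] + AtPA[i][j] - corr[i][j] for j in range(2)]
         for i in range(2)]

# Closed-loop response from a 0.5 m initial lateral offset.
x = [0.5, 0.0]
for _ in range(2000):                  # 20 s of simulated time
    u = -(K[0][0] * x[0] + K[0][1] * x[1])
    x = [x[0] + dt * x[1], x[1] + dt * u]
```

The paper's contribution is to keep this regulator structure while guaranteeing performance when A, B, and even K itself are perturbed, which the nominal design above does not address.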
29 pages, 1415 KB  
Article
Type-2 Backstepping T-S Fuzzy Control Based on Niche Situation
by Yang Cai, Yunli Hao and Yongfang Qi
Math. Comput. Appl. 2025, 30(6), 117; https://doi.org/10.3390/mca30060117 - 22 Oct 2025
Abstract
The niche situation can reflect the advantages and disadvantages of biological individuals in the ecosystem environment as well as the overall operational status of the ecosystem. However, higher-order niche systems generally exhibit complex nonlinearities and parameter uncertainties, making it difficult for traditional Type-1 fuzzy control to accurately handle their inherent fuzziness and environmental disturbances in complex environments. To address this, this paper introduces the backstepping control method based on Type-2 T-S fuzzy control, incorporating the niche situation function as the consequent of the T-S backstepping fuzzy control. The stability analysis of the system is completed by constructing a Lyapunov function, and the adaptive law for the parameters of the niche situation function is derived. This design reflects the tendency of biological individuals to always develop in a direction beneficial to themselves, highlighting the bio-inspired intelligent characteristics of the proposed method. The results of case simulations show that the Type-2 backstepping T-S fuzzy control has significantly superior comprehensive performance in dealing with the complexity and uncertainty of high-order niche situation systems compared with the traditional Type-1 control and Type-2 T-S adaptive fuzzy control. These results not only verify the adaptive and self-development capabilities of biological individuals, as well as their efficiency in environmental utilization, but also endow this control method with a solid practical foundation.
25 pages, 5852 KB  
Article
ADEmono-SLAM: Absolute Depth Estimation for Monocular Visual Simultaneous Localization and Mapping in Complex Environments
by Kaijun Zhou, Zifei Yu, Xiancheng Zhou, Ping Tan, Yunpeng Yin and Huanxin Luo
Electronics 2025, 14(20), 4126; https://doi.org/10.3390/electronics14204126 - 21 Oct 2025
Abstract
Aiming to address the problems of scale uncertainty and dynamic object interference in monocular visual simultaneous localization and mapping (SLAM), this paper proposes an absolute depth estimation network-based monocular visual SLAM method, namely, ADEmono-SLAM. First, detail features, including oriented FAST and rotated BRIEF (ORB) features, are extracted from the input image. An object depth map is obtained through an absolute depth estimation network, and reliable feature points are obtained by a dynamic interference filtering algorithm, which eliminates potential dynamic interference points. Second, the absolute depth image is obtained by using the monocular depth estimation network, in which a dynamic point elimination algorithm using target detection is designed to eliminate dynamic interference points. Finally, the camera poses and map information are obtained by static feature point matching optimization, and remote points are randomly filtered by combining the depth values of the feature points. Experiments on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset, the Technical University of Munich (TUM) dataset, and a mobile robot platform show that the proposed method can obtain sparse maps with absolute scale and improve the pose estimation accuracy of monocular SLAM in various scenarios. Compared with existing methods, the maximum error is reduced by about 80%, which provides an effective approach for applying monocular SLAM in complex environments.
(This article belongs to the Special Issue Digital Intelligence Technology and Applications, 2nd Edition)
28 pages, 6242 KB  
Article
Numerical Prediction of the NPSH Characteristics in Centrifugal Pumps
by Matej Štefanič
Fluids 2025, 10(10), 274; https://doi.org/10.3390/fluids10100274 - 21 Oct 2025
Abstract
This study focuses on the numerical analysis of a centrifugal pump’s suction capability, aiming to reliably predict its suction performance characteristics. The main emphasis of the research was placed on the influence of different turbulence models, the quality of the computational mesh, and the comparison between steady-state and unsteady numerical approaches. The results indicate that steady-state simulations provide an unreliable description of cavitation development, especially at lower flow rates where strong local pressure fluctuations are present. The unsteady k–ω SST model provides the best overall agreement with experimental NPSH3 characteristics, as confirmed by the lowest mean deviation (within the ISO 9906 tolerance band, corresponding to an overall uncertainty of ±5.5%) and by multiple operating points falling entirely within this range. This represents one of the first detailed unsteady CFD verifications of NPSH prediction in centrifugal pumps operating at high rotational speeds (above 2900 rpm), achieving a mean deviation below ±5.5% and demonstrating improved predictive capability compared to conventional steady-state approaches. The analysis also includes an evaluation of the cavitation volume fraction and a depiction of pressure conditions on the impeller as functions of flow rate and inlet pressure. In conclusion, this study highlights the potential of advanced hybrid turbulence models (such as SAS or DES) as a promising direction for future research, which could further improve the prediction of complex cavitation phenomena in centrifugal pumps.
(This article belongs to the Section Mathematical and Computational Fluid Mechanics)
31 pages, 1868 KB  
Article
Information Content and Maximum Entropy of Compartmental Systems in Equilibrium
by Holger Metzler and Carlos A. Sierra
Entropy 2025, 27(10), 1085; https://doi.org/10.3390/e27101085 - 21 Oct 2025
Abstract
Mass-balanced compartmental systems defy classical deterministic entropy measures since both metric and topological entropy vanish in dissipative dynamics. By interpreting open compartmental systems as absorbing continuous-time Markov chains that describe the random journey of a single representative particle, we allow established information-theoretic principles to be applied to this particular type of deterministic dynamical system. In particular, path entropy quantifies the uncertainty of complete trajectories, while entropy rates measure the average uncertainty of instantaneous transitions. Using Shannon’s information entropy, we derive closed-form expressions for these quantities in equilibrium and extend the maximum entropy principle (MaxEnt) to the problem of model selection in compartmental dynamics. This information-theoretic framework not only provides a systematic way to address equifinality but also reveals hidden structural properties of complex systems such as the global carbon cycle.
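The path-entropy idea for absorbing chains can be made concrete in the discrete-time case: by the chain rule, the Shannon entropy of a trajectory started in transient state i equals the sum over transient states j of N[i][j] * h_j, where N = (I - Q)^(-1) is the fundamental matrix (expected visits) and h_j is the entropy of state j's transition row. A small sketch with an illustrative three-state chain, not one of the paper's carbon-cycle models:

```python
import math

def shannon(row):
    # Shannon entropy (nats) of one transition distribution
    return -sum(p * math.log(p) for p in row if p > 0.0)

# Transient states 0 and 1, absorbing state 2 (row-stochastic transitions).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3]]

# Fundamental matrix N = (I - Q)^(-1) for the 2x2 transient block Q.
a, b = 1.0 - P[0][0], -P[0][1]
c, d = -P[1][0], 1.0 - P[1][1]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]

# Expected trajectory entropy from state i: H_i = sum_j N[i][j] * h_j,
# since state j is visited N[i][j] times on average and each visit
# contributes the per-jump entropy h_j.
h = [shannon(row) for row in P]
H = [sum(N[i][j] * h[j] for j in range(2)) for i in range(2)]
```

The continuous-time setting in the paper additionally accounts for the entropy of random holding times, which this discrete sketch omits.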
22 pages, 1585 KB  
Article
Sustainable Control of Large-Scale Industrial Systems via Approximate Optimal Switching with Standard Regulators
by Alexander Chupin, Zhanna Chupina, Oksana Ovchinnikova, Marina Bolsunovskaya, Alexander Leksashov and Svetlana Shirokova
Sustainability 2025, 17(20), 9337; https://doi.org/10.3390/su17209337 - 21 Oct 2025
Abstract
Large-scale production systems (LSPS) operate under growing complexity driven by digital transformation, tighter environmental regulations, and the demand for resilient and resource-efficient operation. Conventional control strategies, particularly PID and isodromic regulators, remain dominant in industrial automation due to their simplicity and robustness; however, their capability to achieve near-optimal performance is limited under constraints on control amplitude, rate, and energy consumption. This study develops an analytical–computational approach for the approximate realization of optimal nonlinear control using standard regulator architectures. The method determines switching moments analytically and incorporates practical feasibility conditions that account for nonlinearities, measurement noise, and actuator limitations. A comprehensive robustness analysis and simulation-based validation were conducted across four representative industrial scenarios—energy, chemical, logistics, and metallurgy. The results show that the proposed control strategy reduces transient duration by up to 20%, decreases overshoot by a factor of three, and lowers transient energy losses by 5–8% compared with baseline configurations, while maintaining bounded-input–bounded-output (BIBO) stability under parameter uncertainty and external disturbances. The framework provides a clear implementation pathway combining analytical tuning with observer-based derivative estimation, ensuring applicability in real industrial environments without requiring complex computational infrastructure. From a broader sustainability perspective, the proposed method contributes to the reliability, energy efficiency, and longevity of industrial systems. By reducing transient energy demand and mechanical wear, it supports sustainable production practices consistent with the following United Nations Sustainable Development Goals: SDG 7 (Affordable and Clean Energy), SDG 9 (Industry, Innovation and Infrastructure), and SDG 12 (Responsible Consumption and Production). The presented results confirm both the theoretical soundness and practical feasibility of the approach, while experimental validation on physical setups is identified as a promising direction for future research.
(This article belongs to the Special Issue Large-Scale Production Systems: Sustainable Manufacturing and Service)
19 pages, 483 KB  
Article
Probabilistic Models for Military Kill Chains
by Stephen Adams, Alex Kyer, Brian Lee, Dan Sobien, Laura Freeman and Jeremy Werner
Systems 2025, 13(10), 924; https://doi.org/10.3390/systems13100924 - 20 Oct 2025
Abstract
Military kill chains are the sequence of events, tasks, or functions that must occur to successfully accomplish a mission. As the Department of Defense moves towards Combined Joint All-Domain Command and Control, which will require the coordination of multiple networked assets with the ability to share data and information, kill chains must evolve into kill webs with multiple paths to achieve a successful mission outcome. Mathematical frameworks for kill webs provide the basis for addressing the complexity of this system-of-systems analysis. A mathematical framework for kill chains and kill webs would provide a military decision maker a structure for assessing several key aspects of mission planning, including the probability of success, alternative chains, and parts of the chain that are likely to fail. However, to the best of our knowledge, a generalized and flexible mathematical formulation for kill chains in military operations does not exist. This study proposes four probabilistic models for kill chains that can later be adapted to kill webs. For each of the proposed models, events in the kill chain are modeled as Bernoulli random variables. This extensible modeling scaffold allows flexibility in constructing the probability of success for each event and is compatible with Monte Carlo simulations and hierarchical Bayesian formulations. The probabilistic models can be used to calculate the probability of a successful kill chain and to perform uncertainty quantification. The models are demonstrated on the Find–Fix–Track–Target–Engage–Assess kill chain. In addition to the mathematical framework, the MIMIK (Mission Illustration and Modeling Interface for Kill webs) software package has been developed and publicly released to support the design and analysis of kill webs.
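The simplest version of such a model, a chain of independent Bernoulli stages whose product gives the overall success probability, can be sketched directly along with a Monte Carlo check. The stage probabilities below are illustrative rather than values from the paper, and this sketch does not use the released MIMIK package:

```python
import random

random.seed(42)

# Find-Fix-Track-Target-Engage-Assess as independent Bernoulli stages.
stages = {"Find": 0.90, "Fix": 0.85, "Track": 0.80,
          "Target": 0.90, "Engage": 0.75, "Assess": 0.95}

# Analytic chain success probability: product over stages.
p_chain = 1.0
for p in stages.values():
    p_chain *= p

# Monte Carlo estimate of the same quantity (uncertainty quantification
# would repeat this with stage probabilities drawn from priors).
def simulate(n=100_000):
    wins = sum(all(random.random() < p for p in stages.values())
               for _ in range(n))
    return wins / n

p_mc = simulate()
```

The paper's richer models relax exactly these assumptions, letting each stage's success probability be constructed hierarchically rather than fixed and independent.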
24 pages, 1841 KB  
Article
A Framework for the Configuration and Operation of EV/FCEV Fast-Charging Stations Integrated with DERs Under Uncertainty
by Leon Fidele Nishimwe H., Kyung-Min Song and Sung-Guk Yoon
Electronics 2025, 14(20), 4113; https://doi.org/10.3390/electronics14204113 - 20 Oct 2025
Abstract
The integration of electric vehicles (EVs) and fuel-cell electric vehicles (FCEVs) requires accessible and profitable facilities for fast charging. To promote fast-charging stations (FCSs), a systematic analysis that encompasses both planning and operation is required, including the incorporation of multi-energy resources and uncertainty. This paper presents an optimization framework that addresses a joint strategy for the configuration and operation of an EV/FCEV fast-charging station (FCS) integrated with distributed energy resources (DERs) and hydrogen systems. The framework incorporates uncertainties related to solar photovoltaic (PV) generation and demand for EVs/FCEVs. The proposed joint strategy comprises a four-phase decision-making framework. Phase 1 involves modeling EV/FCEV demand, while Phase 2 focuses on determining an optimal long-term infrastructure configuration. Subsequently, in Phase 3, the operator optimizes daily power scheduling to maximize profit. A real-time uncertainty update is then executed in Phase 4 upon the realization of uncertainty. The proposed optimization framework, formulated as mixed-integer quadratic programming (MIQP), considers configuration investment, operational, maintenance, and penalty costs for excessive grid power usage. A heuristic algorithm is proposed to solve this problem, yielding good results with significantly less computational complexity. A case study shows that under the most adverse conditions, the proposed joint strategy increases the FCS owner’s profit by 3.32% compared with the deterministic benchmark.
(This article belongs to the Special Issue Advanced Research in Technology and Information Systems, 2nd Edition)
15 pages, 548 KB  
Article
A GAN-Based Approach Incorporating Dempster–Shafer Theory to Mitigate Rating Noise in Collaborative Filtering
by Ouahiba Belgacem, Boudjemaa Boudaa, Abderrahmane Kouadria and Abdelhafid Abouaissa
Digital 2025, 5(4), 57; https://doi.org/10.3390/digital5040057 - 20 Oct 2025
Abstract
Collaborative filtering (CF) continues to be a fundamental approach in recommendation systems for providing users with personalized suggestions. However, such recommender systems are prone to performance issues when faced with noisy, inconsistent, or deliberately manipulated user ratings. Although Generative Adversarial Networks (GANs) offer promising solutions to capture complex user-item interactions in these CF situations, many existing GAN-based methods assume uniform reliability across all ratings, reducing their effectiveness under uncertain conditions. To overcome this challenge, this paper presents DST-AttentiveGAN, a confidence-aware adversarial framework specifically designed to denoise inconsistent ratings in collaborative filtering scenarios. The proposed approach employs Dempster–Shafer Theory (DST) to compute confidence scores by aggregating diverse behavioral indicators, such as item popularity, user activity, and rating variance. These scores guide both components of the GAN architecture: the generator incorporates a cross-attention mechanism to highlight trustworthy features, while the discriminator uses DST-based confidence to evaluate the credibility of input ratings. Training is carried out using a stabilized Wasserstein GAN objective that promotes both robustness and convergence efficiency. Experimental results on three benchmark datasets show that DST-AttentiveGAN consistently surpasses conventional GAN-based models, delivering more accurate and reliable recommendations under conditions of uncertainty.
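The DST confidence computation can be illustrated with Dempster's rule of combination on a two-element frame {trust, distrust}. The mass values for the two behavioral indicators below are illustrative, and the final confidence score is a pignistic-style reading, not necessarily the exact scoring used in DST-AttentiveGAN:

```python
def dempster_combine(m1, m2):
    """Dempster's rule on the frame {T, D}: focal sets 'T' (trust),
    'D' (distrust), 'TD' (ignorance = the whole frame)."""
    intersect = {('T', 'T'): 'T', ('T', 'TD'): 'T', ('TD', 'T'): 'T',
                 ('D', 'D'): 'D', ('D', 'TD'): 'D', ('TD', 'D'): 'D',
                 ('TD', 'TD'): 'TD', ('T', 'D'): None, ('D', 'T'): None}
    out = {'T': 0.0, 'D': 0.0, 'TD': 0.0}
    conflict = 0.0
    for s1, w1 in m1.items():
        for s2, w2 in m2.items():
            target = intersect[(s1, s2)]
            if target is None:
                conflict += w1 * w2      # empty intersection: conflicting mass
            else:
                out[target] += w1 * w2
    # Normalize by 1 - K, the total non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in out.items()}

# Mass functions for two behavioral indicators (illustrative values).
m_popularity = {'T': 0.6, 'D': 0.1, 'TD': 0.3}
m_variance = {'T': 0.5, 'D': 0.2, 'TD': 0.3}
m = dempster_combine(m_popularity, m_variance)

# Pignistic-style confidence: split the ignorance mass evenly.
confidence = m['T'] + 0.5 * m['TD']
```

Combining more indicators is just repeated application of the same rule, which is what makes DST a natural aggregator for heterogeneous evidence about a rating's reliability.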
21 pages, 1266 KB  
Article
Risk Assessment of Offshore Wind–Solar–Current Energy Coupling Hydrogen Production Project Based on Hybrid Weighting Method and Aggregation Operator
by Yandong Du, Xiaoli Chen, Yao Dong, Xinyue Zhou, Yangwen Wu and Qiang Lu
Energies 2025, 18(20), 5525; https://doi.org/10.3390/en18205525 - 20 Oct 2025
Abstract
Under the dual pressures of global climate change and energy structure transition, the offshore wind–solar–current energy coupling hydrogen production (OCWPHP) system has emerged as a promising integrated energy solution. However, its complex multi-energy structure and harsh marine environment introduce systemic risks that are challenging to assess comprehensively using traditional methods. To address this, we develop a novel risk assessment framework based on hesitant fuzzy sets (HFS), establishing a multidimensional risk criteria system covering economic, technical, social, political, and environmental aspects. A hybrid weighting method integrating AHP, entropy weighting, and consensus adjustment is proposed to determine expert weights while minimizing risk information loss. Two aggregation operators—AHFOWA and AHFOWG—are applied to enhance uncertainty modeling. A case study of an OCWPHP project in the East China Sea is conducted, with the overall risk level assessed as “Medium.” Comparative analysis with the classical Cumulative Prospect Theory (CPT) method shows that our approach yields a risk value of 0.4764, closely aligning with the CPT result of 0.4745, thereby confirming the feasibility and credibility of the proposed framework. This study provides both theoretical support and practical guidance for early-stage risk assessment of OCWPHP projects.
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)
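The ordered-weighted-averaging step underlying operators such as AHFOWA can be sketched on hesitant fuzzy inputs: score each hesitant fuzzy element by its mean membership degree, then apply OWA weights to the ranked scores. All values and weights below are illustrative, and this is a simplification of the paper's operators, not their exact definition:

```python
def hfe_score(h):
    # score of a hesitant fuzzy element: mean of its membership degrees
    return sum(h) / len(h)

def owa(values, weights):
    # Ordered weighted averaging: weights attach to ranks, not to criteria.
    assert abs(sum(weights) - 1.0) < 1e-9
    ranked = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ranked))

# Hesitant fuzzy risk ratings per criterion (illustrative values: each list
# holds the membership degrees the hesitating experts proposed).
criteria = {
    "economic": [0.40, 0.50],
    "technical": [0.60, 0.70, 0.65],
    "environmental": [0.30, 0.45],
}
scores = [hfe_score(h) for h in criteria.values()]
risk = owa(scores, weights=[0.5, 0.3, 0.2])   # top-weighted, risk-averse
```

Because OWA weights attach to ranks, a top-heavy weight vector emphasizes the worst-scoring criteria regardless of which criterion produced them, which is the risk-averse behavior such assessments typically want.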
22 pages, 4780 KB  
Article
A Fusion Estimation Method for Tire-Road Friction Coefficient Based on Weather and Road Images
by Jiye Huang, Xinshi Chen, Qingsong Jin and Ping Li
Lubricants 2025, 13(10), 459; https://doi.org/10.3390/lubricants13100459 - 20 Oct 2025
Abstract
The tire-road friction coefficient (TRFC) is a critical parameter that significantly influences vehicle safety, handling stability, and driving comfort. Existing estimation methods based on vehicle dynamics suffer from a substantial decline in accuracy under conditions with insufficient excitation, while vision-based approaches are often limited by the generalization ability of their datasets, making them less effective in complex and variable real-driving environments. To address these challenges, this paper proposes a novel, low-cost fusion method for TRFC estimation that integrates weather conditions and road image data. The proposed approach begins by employing semantic segmentation to partition the input images into distinct regions—sky and road. The segmented images are then fed into the road recognition network and the weather recognition network for road type and weather classification. Furthermore, a fusion decision tree incorporating an uncertainty modeling mechanism is introduced to dynamically integrate these multi-source features, thereby enhancing the robustness of the estimation. Experimental results demonstrate that the proposed method maintains stable and reliable estimation performance even on unseen road surfaces, significantly outperforming single-modality methods. This indicates its high practical value and promising potential for broad application.
23 pages, 321 KB  
Article
Nonlinear Shrinkage Estimation of Higher-Order Moments for Portfolio Optimization Under Uncertainty in Complex Financial Systems
by Wanbo Lu and Zhenzhong Tian
Entropy 2025, 27(10), 1083; https://doi.org/10.3390/e27101083 - 20 Oct 2025
Abstract
This paper develops a nonlinear shrinkage estimation method for higher-order moment matrices within a multifactor model framework and establishes its asymptotic consistency under high-dimensional settings. The approach extends the nonlinear shrinkage methodology from covariance to higher-order moments, thereby mitigating the “curse of dimensionality” and alleviating estimation uncertainty in high-dimensional settings. Monte Carlo simulations demonstrate that, compared with linear shrinkage estimation, the proposed method substantially reduces mean squared errors (MSEs) and achieves greater Percentage Relative Improvement in Average Loss (PRIAL) for covariance and cokurtosis estimates; relative to sample estimation, it delivers significant gains in mitigating uncertainty for covariance, coskewness, and cokurtosis. An empirical portfolio analysis incorporating higher-order moments shows that, when the asset universe is large, portfolios based on the nonlinear shrinkage estimator outperform those constructed using linear shrinkage and sample estimators, achieving higher annualized return and Sharpe ratio with lower kurtosis and maximum drawdown, thus providing stronger resilience against uncertainty in complex financial systems. In smaller asset universes, nonlinear shrinkage portfolios perform on par with their linear shrinkage counterparts. These findings highlight the potential of nonlinear shrinkage techniques to reduce uncertainty in higher-order moment estimation and to improve portfolio performance across diverse and complex investment environments.
(This article belongs to the Special Issue Complexity and Synchronization in Time Series)
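The linear-shrinkage baseline that the proposed nonlinear estimator is compared against can be illustrated in a few lines: shrink the sample covariance toward a scaled identity and observe the reduction in estimation error. The fixed shrinkage intensity below is illustrative (the Ledoit–Wolf estimator chooses it optimally from the data), and this covers only the covariance case, not the paper's higher-order-moment extension:

```python
import random

random.seed(7)
d, n = 10, 15

# Draw n samples from N(0, I_d); the true covariance is the identity.
X = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
mean = [sum(row[j] for row in X) / n for j in range(d)]
S = [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in X) / (n - 1)
      for j in range(d)] for i in range(d)]

# Linear shrinkage toward a scaled identity: S* = (1-delta)*S + delta*mu*I.
mu = sum(S[i][i] for i in range(d)) / d   # target scale: average variance
delta = 0.5                               # fixed intensity (illustrative)
S_shrunk = [[(1.0 - delta) * S[i][j] + (delta * mu if i == j else 0.0)
             for j in range(d)] for i in range(d)]

def frob_err(A):
    # Frobenius distance to the true covariance (the identity matrix)
    return sum((A[i][j] - (1.0 if i == j else 0.0)) ** 2
               for i in range(d) for j in range(d)) ** 0.5

err_sample, err_shrunk = frob_err(S), frob_err(S_shrunk)
```

With few samples per dimension, the noisy off-diagonal entries of S dominate its error, so pulling them toward zero helps; nonlinear shrinkage generalizes this by transforming each sample eigenvalue individually rather than applying one global intensity.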
24 pages, 424 KB  
Article
Canonical Quantization of Metric Tensor for General Relativity in Pseudo-Riemannian Geometry
by Abdel Nasser Tawfik, Salah G. Elgendi, Sameh Shenawy and Mahmoud Hanafy
Physics 2025, 7(4), 52; https://doi.org/10.3390/physics7040052 - 20 Oct 2025
Abstract
By extending the four-dimensional semi-Riemann geometry to higher-dimensional Finsler/Hamilton geometry, the canonical quantization of the fundamental metric tensor of general relativity, i.e., an approach that tackles a geometric quantity, is derived. With this quantization, the smooth continuous Finsler structure is transformed into a quantized Hamilton structure through the kinematics of a free-falling quantum particle with a positive mass, along with the introduction of the relativistic generalized uncertainty principle (RGUP) that generalizes quantum mechanics by integrating gravity. This transformation ensures the preservation of the positive one-homogeneity of both Finsler and Hamilton structures, while the RGUP dictates modifications in the noncommutative relations due to integrating consequences of relativistic gravitational fields in quantum mechanics. The anisotropic conformal transformation of the resulting metric tensor and its inverse in higher-dimensional spaces has been determined, particularly highlighting their translations to the four-dimensional fundamental metric tensor and its inverse. Computing the fundamental inverse metric tensor under a conformal transformation is notably complex, since it depends on both spatial coordinates and directional orientation, which makes the task challenging, especially in tensorial terms. We conclude that the derivations in this study are not limited to the structure in tangent and cotangent bundles, which might include both spacetime and momentum space, but are also applicable to higher-dimensional contexts. The theoretical framework of quantization of general relativity based on quantizing its metric tensor is primarily grounded in the four-dimensional metric tensor and its inverse in pseudo-Riemannian geometry.
(This article belongs to the Special Issue Beyond the Standard Models of Physics and Cosmology: 2nd Edition)