Search Results (343)

Search Parameters:
Keywords = maximum entropy principle

26 pages, 2759 KB  
Article
Inverse Inference of Component Reliability for k-Out-of-n Systems Based on Maximum Entropy and Hazard-Rate Matrix Representation
by Chao Li, Tianci Gong, Daoqing Zhou, Jingjing He and Xuefei Guan
Mathematics 2026, 14(7), 1181; https://doi.org/10.3390/math14071181 - 1 Apr 2026
Viewed by 280
Abstract
This study presents a rational inverse inference framework for k-out-of-n systems that derives component-level reliability characteristics from system-level failure and monitoring data. The framework employs a hazard-rate matrix to represent the degradation hierarchy and applies the principle of maximum entropy to allocate system-level probabilities to latent component-state configurations without bias, yielding analytical solutions for component hazard rates. The key innovation lies in combining maximum entropy with the hazard-rate matrix, which overcomes the ill-posed nature of the inverse problem and enables systematic integration of heterogeneous auxiliary information within a unified formulation, including system-level multi-state observations, component-wise moment constraints, sub-component data, and inter-component dependencies. This flexibility addresses a major limitation of existing inverse methods, such as Bayesian approaches, which are typically restricted to a single data type and often require strong prior assumptions or extensive failure datasets. The practical applicability of the framework is demonstrated through a case study of a west-to-east gas pipeline pumping system, highlighting its effectiveness in processing multiple information types and delivering actionable component-level reliability assessments for maintenance decision support. To the best of our knowledge, this is the first study to formulate and solve the inverse inference problem for k-out-of-n systems in a theoretically grounded and information-theoretically optimal manner. Full article
(This article belongs to the Section D1: Probability and Statistics)
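To make the allocation step concrete: a minimal sketch, assuming a symmetric 2-out-of-3 system and a hypothetical observed system failure probability; the paper's hazard-rate matrix and auxiliary constraints are omitted.

```python
# Minimal sketch (not the authors' implementation): maximum-entropy
# allocation of an observed system-level failure probability to latent
# component-state configurations of a 2-out-of-3 system.
from itertools import product

k, n = 2, 3
p_sys_fail = 0.05  # hypothetical system-level observation

configs = list(product([0, 1], repeat=n))      # 1 = component working
failed = [c for c in configs if sum(c) < k]    # system-failure states
working = [c for c in configs if sum(c) >= k]

# MaxEnt subject only to the two mass constraints -> uniform in each class
p = {c: p_sys_fail / len(failed) for c in failed}
p.update({c: (1 - p_sys_fail) / len(working) for c in working})

# Component-level marginal failure probability recovered from the allocation
p_comp_fail = sum(prob for c, prob in p.items() if c[0] == 0)
print(f"component failure probability: {p_comp_fail:.4f}")
```

With only the two mass constraints, maximum entropy spreads probability uniformly within the failure and working classes; the richer constraints described in the abstract would break this symmetry.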
18 pages, 339 KB  
Article
Entropy-Based Portfolio Optimization in Cryptocurrency Markets: A Unified Maximum Entropy Framework
by Silvia Dedu and Florentin Șerban
Entropy 2026, 28(3), 285; https://doi.org/10.3390/e28030285 - 2 Mar 2026
Viewed by 563
Abstract
Traditional mean–variance portfolio optimization proves inadequate for cryptocurrency markets, where extreme volatility, fat-tailed return distributions, and unstable correlation structures undermine the validity of variance as a comprehensive risk measure. To address these limitations, this paper proposes a unified entropy-based portfolio optimization framework grounded in the Maximum Entropy Principle (MaxEnt). Within this setting, Shannon entropy, Tsallis entropy, and Weighted Shannon Entropy (WSE) are formally derived as particular specifications of a common constrained optimization problem solved via the method of Lagrange multipliers, ensuring analytical coherence and mathematical transparency. Moreover, the proposed MaxEnt formulation provides an information-theoretic interpretation of portfolio diversification as an inference problem under uncertainty, where optimal allocations correspond to the least informative distributions consistent with prescribed moment constraints. In this perspective, entropy acts as a structural regularizer that governs the geometry of diversification rather than as a direct proxy for risk. This interpretation strengthens the conceptual link between entropy, uncertainty quantification, and decision-making in complex financial systems, offering a robust and distribution-free alternative to classical variance-based portfolio optimization. The proposed framework is empirically illustrated using a portfolio composed of major cryptocurrencies—Bitcoin (BTC), Ethereum (ETH), Solana (SOL), and Binance Coin (BNB)—based on weekly return data. The results reveal systematic differences in the diversification behavior induced by each entropy measure: Shannon entropy favors near-uniform allocations, Tsallis entropy imposes stronger penalties on concentration and enhances robustness to tail risk, while WSE enables the incorporation of asset-specific informational weights reflecting heterogeneous market characteristics. From a theoretical perspective, the paper contributes a coherent MaxEnt formulation that unifies several entropy measures within a single information-theoretic optimization framework, clarifying the role of entropy as a structural regularizer of diversification. From an applied standpoint, the results indicate that entropy-based criteria yield stable and interpretable allocations across turbulent market regimes, offering a flexible alternative to classical risk-based portfolio construction. The framework naturally extends to dynamic multi-period settings and alternative entropy formulations, providing a foundation for future research on robust portfolio optimization under uncertainty. Full article
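The exponential (Gibbs) form that the Lagrange-multiplier solution takes can be sketched in a few lines; the returns and the target below are hypothetical.

```python
# Sketch, not the paper's code: Shannon-entropy-maximizing weights under a
# target-return constraint, using the exponential form of the solution.
import numpy as np
from scipy.optimize import brentq

mu = np.array([0.12, 0.09, 0.15, 0.07])   # hypothetical expected returns
target = 0.11                              # required portfolio return

def weights(lam):
    w = np.exp(lam * mu)
    return w / w.sum()

# Choose lambda so the exponential-form weights meet the return constraint
lam = brentq(lambda l: weights(l) @ mu - target, -100, 100)
w = weights(lam)
print("weights:", w.round(4), "return:", (w @ mu).round(4))
```

At lambda = 0 the solution is the uniform (maximum-entropy) portfolio; the multiplier tilts it just enough to satisfy the moment constraint, which is the MaxEnt logic the abstract describes.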
30 pages, 1153 KB  
Article
Some Additional Principles of Living Systems Functioning and Their Application for Expanding the Theory of a Possible Typology of National Food Systems Strategies
by Pavel Brazhnikov
Systems 2026, 14(3), 230; https://doi.org/10.3390/systems14030230 - 25 Feb 2026
Viewed by 517
Abstract
This article describes the basic principles of the functioning of living systems, which distinguish them from other systems. The concept of dividing living systems’ resources into matter and energy has been expanded by describing their contribution to systems’ entropy. Within social systems, human individuals serve as the functional equivalent of energy in ordinary living systems, acting as the driving and redistributive force with respect to matter. Furthermore, additional characteristics of system resources that impact the strategies of living systems regarding their resources have been introduced. Additionally, the maximum rate of development of living systems under ideal conditions has been demonstrated. Based on the above, this article presents the most natural sequence of changes of living systems in relation to their sources of matter and energy. Moreover, such a sequence of strategy changes is also considered for national food systems in which infrastructure elements and workers represent matter and energy. This article can provide a valuable initial insight into the degree of correspondence between the general structural organization of state food systems and the operational conditions under which they function. Full article
22 pages, 1439 KB  
Article
A Thermodynamic Closure Model for Titan’s Surface Temperature: Its Long-Term Stability Anchored to Methane’s Triple Point
by Hsien-Wang Ou
Geosciences 2026, 16(2), 90; https://doi.org/10.3390/geosciences16020090 - 22 Feb 2026
Viewed by 402
Abstract
We develop a minimal thermodynamic model to predict Titan’s surface temperature based on radiative–convective equilibrium and the principle of maximum entropy production (MEP). The model retains only the essential atmospheric constituents: gaseous methane, which absorbs both longwave and near-infrared radiation, and stratospheric haze, which scatters and absorbs solar flux. Subject to Clausius–Clapeyron scaling of methane vapor pressure together with energy balances at the surface, tropopause, and stratopause, the model links the convective flux to the surface temperature, which exhibits a pronounced maximum due to competing radiative effects of tropospheric methane. As the surface warms, enhanced greenhouse effect would strengthen the convection, whereas the rising anti-greenhouse effect would suppress convection. The resulting convective peak corresponds to MEP, which thus selects a surface temperature slightly above methane’s triple point. To assess its long-term evolution, we consider a 20% dimmer early Sun and a hypothetical 20% enrichment of the oceanic methane. Even in combination, they only cool the surface by ~2 K, in sharp contrast to the ~20 K cooling inferred in studies that prescribe haze abundance. This study suggests a critical role of self-adjusting haze in providing the internal degree of freedom necessary for MEP closure, thereby stabilizing Titan’s temperature. Full article
(This article belongs to the Section Climate and Environment)
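The MEP selection step, isolated from the paper's radiative balances, amounts to maximizing a unimodal convective-flux curve; the flux form below is a made-up stand-in, not the paper's model.

```python
# Toy illustration of the MEP selection step only: given some unimodal
# convective-flux curve F(T), MEP picks the surface temperature that
# maximizes F. The functional form here is hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

T0 = 90.7  # K, methane's triple point, used as a reference temperature

def conv_flux(T):
    # hypothetical shape: greenhouse gain vs. anti-greenhouse suppression
    x = T - T0
    return x * np.exp(-0.2 * x) if x > 0 else 0.0

res = minimize_scalar(lambda T: -conv_flux(T), bounds=(T0, T0 + 40),
                      method="bounded")
print(f"MEP-selected surface temperature: {res.x:.1f} K")
```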
43 pages, 5548 KB  
Article
A Novel Probabilistic Model for Streamflow Analysis and Its Role in Risk Management and Environmental Sustainability
by Tassaddaq Hussain, Enrique Villamor, Mohammad Shakil, Mohammad Ahsanullah and Bhuiyan Mohammad Golam Kibria
Axioms 2026, 15(2), 113; https://doi.org/10.3390/axioms15020113 - 4 Feb 2026
Viewed by 747
Abstract
Probabilistic streamflow models play a pivotal role in quantifying hydrological uncertainty and form the backbone of modern risk management strategies for flood and drought forecasting, water allocation planning, and the design of resilient infrastructure. Unlike deterministic approaches that yield single-point estimates, these models provide a spectrum of possible outcomes, enabling a more realistic assessment of extreme events and supporting informed, sustainable water resource decisions. By explicitly accounting for natural variability and uncertainty, probabilistic models promote transparent, robust, and equitable risk evaluations, helping decision-makers balance economic costs, societal benefits, and environmental protection for long-term sustainability. In this study, we introduce the bounded half-logistic distribution (BHLD), a novel heavy-tailed probability model constructed using the T–Y method for distribution generation, where T denotes a transformer distribution and Y represents a baseline generator. Although the BHLD is conceptually related to the Pareto and log-logistic families, it offers several distinctive advantages for streamflow modeling, including a flexible hazard rate that can be unimodal or monotonically decreasing, a finite lower bound, and closed-form expressions for key risk measures such as Value at Risk (VaR) and Tail Value at Risk (TVaR). The proposed distribution is defined on a lower-bounded domain, allowing it to realistically capture physical constraints inherent in flood processes, while a log-logistic-based tail structure provides the flexibility needed to model extreme hydrological events. Moreover, the BHLD is analytically characterized through a governing differential equation and further examined via its characteristic function and the maximum entropy principle, ensuring stable and efficient parameter estimation. It integrates a half-logistic generator with a log-logistic baseline, yielding a power-law tail decay governed by the parameter β, which is particularly effective for representing extreme flows. Fundamental properties, including the hazard rate function, moments, and entropy measures, are derived in closed form, and model parameters are estimated using the maximum likelihood method. Applied to four real streamflow data sets, the BHLD demonstrates superior performance over nine competing distributions in goodness-of-fit analyses, with notable improvements in tail representation. The model facilitates accurate computation of hydrological risk metrics such as VaR, TVaR, and tail variance, uncovering pronounced temporal variations in flood risk and establishing the BHLD as a powerful and reliable tool for streamflow modeling under changing environmental conditions. Full article
(This article belongs to the Special Issue Probability Theory and Stochastic Processes: Theory and Applications)
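The BHLD has no public implementation; as a stand-in, the same risk measures can be computed on the log-logistic (Fisk) family that supplies its tail structure, with hypothetical parameters.

```python
# Not the BHLD itself: VaR and TVaR on the log-logistic (Fisk) family,
# with a hypothetical shape/scale standing in for a streamflow fit.
from scipy.stats import fisk

c, scale = 2.5, 100.0   # hypothetical shape and scale
q = 0.99                # risk level

var_q = fisk.ppf(q, c, scale=scale)                  # Value at Risk
# Tail Value at Risk: conditional mean of flows above VaR
tvar_q = fisk.expect(lambda x: x, args=(c,), scale=scale,
                     lb=var_q, conditional=True)
print(f"VaR_99: {var_q:.1f}  TVaR_99: {tvar_q:.1f}")
```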
30 pages, 616 KB  
Article
Structural Preservation in Time Series Through Multiscale Topological Features Derived from Persistent Homology
by Luiz Carlos de Jesus, Francisco Fernández-Navarro and Mariano Carbonero-Ruz
Mathematics 2026, 14(3), 538; https://doi.org/10.3390/math14030538 - 2 Feb 2026
Viewed by 789
Abstract
A principled, model-agnostic framework for structural feature extraction in time series is presented, grounded in topological data analysis (TDA). The motivation stems from two gaps identified in the literature: First, compact and interpretable representations that summarise the global geometric organisation of trajectories across scales remain scarce. Second, a unified, task-agnostic protocol for evaluating structure preservation against established non-topological families is still missing. To address these gaps, time-delay embeddings are employed to reconstruct phase space, sliding windows are used to generate local point clouds, and Vietoris–Rips persistent homology (up to dimension two) is computed. The resulting persistence diagrams are summarised with three transparent descriptors—persistence entropy, maximum persistence amplitude, and feature counts—and concatenated across delays and window sizes to yield a multiscale representation designed to complement temporal and spectral features while remaining computationally tractable. A unified experimental design is specified in which heterogeneous, regularly sampled financial series are preprocessed on native calendars and contrasted with competitive baselines spanning lagged, calendar-driven, difference/change, STL-based, delay-embedding PCA, price-based statistical, signature (FRUITS), and network-derived (NetF) features. Structure preservation is assessed through complementary criteria that probe spectral similarity, variance-scaled reconstruction fidelity, and the conservation of distributional shape (location, scale, asymmetry, tails). The study is positioned as an evaluation of representations, rather than a forecasting benchmark, emphasising interpretability, comparability, and methodological transparency while outlining avenues for adaptive hyperparameter selection and alternative filtrations. Full article
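The three diagram summaries are simple to compute once a persistence diagram is in hand; the diagram below is hypothetical, and producing diagrams via Vietoris–Rips filtrations is omitted.

```python
# Sketch of the three descriptors named above: persistence entropy,
# maximum persistence amplitude, and feature count.
import numpy as np

dgm = np.array([[0.0, 0.8],    # hypothetical (birth, death) pairs
                [0.1, 0.5],
                [0.2, 0.3]])

life = dgm[:, 1] - dgm[:, 0]          # lifetimes of topological features
p = life / life.sum()                  # normalized lifetime distribution
persistence_entropy = -np.sum(p * np.log(p))
max_amplitude = life.max()
n_features = len(dgm)

print(persistence_entropy, max_amplitude, n_features)
```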
41 pages, 8251 KB  
Article
Trade-Off Between Entropy and Gini Index in Income Distribution
by Demetris Koutsoyiannis and G.-Fivos Sargentis
Entropy 2026, 28(1), 35; https://doi.org/10.3390/e28010035 - 26 Dec 2025
Viewed by 1194
Abstract
We investigate the fundamental trade-off between entropy and the Gini index within income distributions, employing a stochastic framework to expose deficiencies in conventional inequality metrics. Anchored in the principle of maximum entropy (ME), we position entropy as a key marker of societal robustness, while the Gini index, identical to the (second-order) K-spread coefficient, captures spread but neglects dynamics in distribution tails. We recommend supplanting Lorenz profiles with simpler graphs such as the odds and probability density functions, and a core set of numerical indicators (K-spread K₂/μ, standardized entropy Φ_μ, and upper and lower tail indices ξ and ζ) for deeper diagnostics. This approach fuses ME into disparity evaluation, highlighting a path to harmonize fairness with structural endurance. Drawing from percentile records in the World Income Inequality Database from 1947 to 2023, we fit flexible models (Pareto–Burr–Feller, Dagum) and extract K-moments and tail indices. The results unveil a concave frontier: moderate Gini reductions have little effect on entropy, but aggressive equalization incurs steep stability costs. Country-level analyses (Argentina, Brazil, South Africa, Bulgaria) link entropy declines to political ruptures, positioning low entropy as a precursor to instability. On the other hand, analyses based on the core set of indicators for present-day geopolitical powers show that they are positioned in a high stability area. Full article
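As a point of reference for one of the quantities involved, a sketch of the sample Gini index on hypothetical heavy-tailed incomes; the paper's standardized entropy and tail indices require its fitted models and are not reproduced here.

```python
# Illustrative only: the sample Gini index (which the paper identifies
# with the second-order K-spread coefficient) for hypothetical incomes.
import numpy as np

rng = np.random.default_rng(0)
income = rng.pareto(3.0, 10_000) + 1.0   # hypothetical heavy-tailed sample

x = np.sort(income)
n = len(x)
# Sorted-sample formula: G = 2*sum(i*x_(i)) / (n*sum(x)) - (n+1)/n
gini = 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n
print(f"Gini index: {gini:.3f}")   # ~0.2 for a Pareto tail index of 3
```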
34 pages, 2633 KB  
Article
Additional Contributions of Thermodynamics to Economics
by Vítor Costa
Economies 2026, 14(1), 6; https://doi.org/10.3390/economies14010006 - 25 Dec 2025
Viewed by 660
Abstract
The contributions of Thermodynamics to Economics have been masterfully pioneered by P. A. Samuelson in his Foundations of Economic Analysis, adapting the Le Châtelier Principle to relate changes in economic variables. Recent contributions have been given, including the economic counterparts of energy, temperature, reversibility, and irreversibility, the Carnot engine, entropy, entropy generation, and the four Laws of Thermodynamics. Starting from them, toward a more efficient (more perfect) economy, the present work aims at (i) showing the contribution of negotiation to a more perfect economy; (ii) proposing endoreversible economic processes, and evaluating their efficiency at maximum merchandise wealth delivery; (iii) proposing the dynamic economic processes’ analysis based on the Economics analogue of specific heat, closely related to the demand elasticity coefficient; (iv) exploring ways to maximize merchandise wealth delivery instead of maximizing merchandise economic entropy generation (financial value generation) in dynamic processes; and (v) defining and evaluating the Economics analogue of exergy, the maximum potential of economic systems to deliver merchandise wealth. Full article
(This article belongs to the Section Growth, and Natural Resources (Environment + Agriculture))
27 pages, 3196 KB  
Article
Reliability-Based Robust Design Optimization Using Data-Driven Polynomial Chaos Expansion
by Zhaowang Li, Zhaozhan Li, Jufang Jia and Xiangdong He
Machines 2026, 14(1), 20; https://doi.org/10.3390/machines14010020 - 23 Dec 2025
Cited by 2 | Viewed by 696
Abstract
As the complexity of modern engineering systems continues to increase, traditional reliability analysis methods still face challenges regarding computational efficiency and reliability in scenarios where the distribution information of random variables is incomplete and samples are sparse. Therefore, this study develops a data-driven polynomial chaos expansion (DD-PCE) model for scenarios with limited samples and applies it to reliability-based robust design optimization (RBRDO). The model directly constructs orthogonal polynomial basis functions from input data by matching statistical moments, thereby avoiding the need for original data or complete statistical information as required by traditional PCE methods. To address the statistical moment estimation bias caused by sparse samples, kernel density estimation (KDE) is employed to augment the data derived from limited samples. Furthermore, to enhance computational efficiency, after determining the DD-PCE coefficients, the first four moments of the DD-PCE are obtained analytically, and reliability is computed based on the maximum entropy principle (MEP), thereby eliminating the additional step of solving reliability as required by traditional PCE methods. The proposed approach is validated through a mechanical structure and five mathematical functions, with RBRDO studies conducted on three typical structures and one practical engineering case. The results demonstrate that, while ensuring computational accuracy, this method saves approximately 90% of the time compared to the Monte Carlo simulation (MCS) method, significantly improving computational efficiency. Full article
(This article belongs to the Section Machine Design and Theory)
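One ingredient, the KDE augmentation that feeds the moment estimates, can be sketched as follows; the sample and sizes are hypothetical, and the PCE construction and MEP reliability step are omitted.

```python
# Sketch: augment a sparse sample with Gaussian KDE before estimating the
# statistical moments that drive the data-driven PCE basis and the MEP
# reliability evaluation. Data and sizes are hypothetical.
import numpy as np
from scipy.stats import gaussian_kde, moment

rng = np.random.default_rng(1)
sparse = rng.lognormal(0.0, 0.4, size=20)        # hypothetical sparse data

kde = gaussian_kde(sparse)
augmented = kde.resample(10_000, seed=1).ravel() # KDE-based augmentation

# First four moments (mean plus central moments), as used downstream
m = [augmented.mean()] + [moment(augmented, k) for k in (2, 3, 4)]
print(np.round(m, 4))
```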
25 pages, 4458 KB  
Article
Quantifying Knowledge Production Efficiency with Thermodynamics: A Data-Driven Study of Scientific Concepts
by Artem Chumachenko and Brett Buttliere
Entropy 2026, 28(1), 11; https://doi.org/10.3390/e28010011 - 22 Dec 2025
Viewed by 829
Abstract
We develop a data-driven framework for analyzing how scientific concepts evolve through their empirical in-text frequency distributions in large text corpora. For each concept, the observed distribution is paired with a maximum entropy equilibrium reference, which takes a generalized Boltzmann form determined by two measurable statistical moments. Using data from more than 500,000 physics papers (about 13,000 concepts, 2000–2018), we reconstruct the temporal trajectories of the associated MaxEnt parameters and entropy measures, and we identify two characteristic regimes of concept dynamics, stable and driven, separated by a transition point near criticality. Departures from equilibrium are quantified using a residual-information measure that captures how much structure a concept exhibits beyond its equilibrium baseline. To analyze temporal change, we adapt the Hatano–Sasa and Esposito–Van den Broeck decomposition to discrete time and separate maintenance-like contributions from externally driven reorganization. The proposed efficiency indicators describe how concepts sustain or reorganize their informational structure under a finite representational capacity. Together, these elements provide a unified and empirically grounded description of concept evolution in scientific communication, based on equilibrium references, nonequilibrium structure, and informational work. Full article
(This article belongs to the Special Issue The Thermodynamics of Social Processes)
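A loose sketch of the equilibrium-reference idea, assuming a Boltzmann-type family p_i ∝ exp(-l1·x_i - l2·x_i²) fitted to two moments and KL divergence as the residual-information score; the paper's exact definitions may differ.

```python
# Loose sketch: fit a two-parameter generalized Boltzmann reference to the
# first two moments of an observed frequency distribution, then score the
# departure from equilibrium as KL(observed || reference).
import numpy as np
from scipy.optimize import fsolve

x = np.arange(1, 21)                 # support (e.g., usage levels)
obs = x * np.exp(-0.5 * x)           # hypothetical observed frequencies
obs = obs / obs.sum()
m1, m2 = obs @ x, obs @ x**2         # moments to be matched

def reference(lams):
    p = np.exp(-lams[0] * x - lams[1] * x**2)
    return p / p.sum()

def moment_gap(lams):
    p = reference(lams)
    return [p @ x - m1, p @ x**2 - m2]

lams = fsolve(moment_gap, [0.1, 0.0])
ref = reference(lams)
residual_info = np.sum(obs * np.log(obs / ref))  # KL divergence, in nats
print(f"residual information: {residual_info:.4f} nats")
```

Because the two moments are matched exactly, this KL divergence equals the entropy gap between the reference and the observed distribution, which is the sense in which it measures structure beyond the equilibrium baseline.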
34 pages, 518 KB  
Review
Decision, Inference, and Information: Formal Equivalences Under Active Inference
by Patrick Sweeney, Jaime Ruiz-Serra and Michael S. Harré
Entropy 2026, 28(1), 1; https://doi.org/10.3390/e28010001 - 19 Dec 2025
Viewed by 1825
Abstract
A central challenge in artificial intelligence and cognitive science is identifying a unifying principle that governs inference, learning, and action. Active inference proposes such a principle: the minimization of variational free energy. Advocates of active inference argue that the framework subsumes classical models of optimal behavior—including Bayesian decision theory, resource rationality, optimal control, and reinforcement learning—while also instantiating information-theoretic principles such as rate-distortion theory and maximum entropy. However, the literature outlining these conceptual links remains fragmented, limiting integration across fields. This review develops these connections systematically. We show how these major frameworks admit formal correspondences with expected free energy minimization when expressed in variational form, exposing a shared optimization principle that underlies theories of optimal decision-making and information processing. This synthesis is intended both to orient researchers from other fields who are new to active inference and to clarify foundational assumptions for those already working within the framework. Full article
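The central quantity is easy to exhibit for a two-state discrete model; all numbers below are hypothetical.

```python
# Minimal sketch of variational free energy
# F = E_q[ln q(s) - ln p(o, s)] for a two-state generative model.
import numpy as np

prior = np.array([0.7, 0.3])        # p(s)
likelihood = np.array([0.9, 0.2])   # p(o | s) for the observed o
q = np.array([0.8, 0.2])            # approximate posterior q(s)

joint = likelihood * prior          # p(o, s)
F = np.sum(q * (np.log(q) - np.log(joint)))
print(f"variational free energy: {F:.4f} nats")

# F upper-bounds surprise: minimizing F over q recovers the exact
# posterior, at which point F = -ln p(o).
print(f"-ln p(o): {-np.log(joint.sum()):.4f}")
```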
14 pages, 977 KB  
Article
Maximizing Portfolio Diversification via Weighted Shannon Entropy: Application to the Cryptocurrency Market
by Florentin Șerban and Silvia Dedu
Risks 2025, 13(12), 253; https://doi.org/10.3390/risks13120253 - 18 Dec 2025
Cited by 1 | Viewed by 1352
Abstract
This paper develops a robust portfolio optimization framework that integrates Weighted Shannon Entropy (WSE) into the classical mean–variance paradigm, offering a distribution-free approach to diversification suited for volatile and heavy-tailed markets. While traditional variance-based models are highly sensitive to estimation errors and instability in covariance structures—issues that are particularly acute in cryptocurrency markets—entropy provides a structural mechanism for mitigating concentration risk and enhancing resilience under uncertainty. By incorporating informational weights that reflect asset-specific characteristics such as volatility, market capitalization, and liquidity, the WSE model generalizes classical Shannon entropy and allows for more realistic, data-driven diversification profiles. Analytical solutions derived from the maximum entropy principle and Lagrange multipliers yield exponential-form portfolio weights that balance expected return, variance, and diversification. The empirical analysis examines two case studies: a four-asset cryptocurrency portfolio (BTC, ETH, SOL, and BNB) over January–March 2025, and an extended twelve-asset portfolio over April 2024–March 2025 with rolling rebalancing and proportional transaction costs. The results show that WSE portfolios achieve systematically higher entropy scores, more balanced allocations, and improved downside protection relative to both equal-weight and classical mean–variance portfolios. Risk-adjusted metrics confirm these improvements: WSE delivers higher Sharpe ratios and less negative Conditional Value-at-Risk (CVaR), together with reduced overexposure to highly volatile assets. Overall, the findings demonstrate that Weighted Shannon Entropy offers a transparent, flexible, and robust framework for portfolio construction in environments characterized by nonlinear dependencies, structural breaks, and parameter uncertainty. Beyond its empirical performance, the WSE model provides a theoretically grounded bridge between information theory and risk management, with strong potential for applications in algorithmic allocation, index construction, and regulatory settings where diversification and stability are essential. Moreover, the integration of informational weighting schemes highlights the capacity of WSE to incorporate both statistical properties and market microstructure signals, thereby enhancing its practical relevance for real-world investment decision-making. Full article
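A numerical sketch of the weighted-entropy objective; the paper derives closed-form exponential weights, whereas a generic constrained solver is used here, with hypothetical returns and informational weights.

```python
# Sketch of the WSE idea: maximize -sum(c * w * log(w)) subject to
# sum(w) = 1 and a target return. Inputs are hypothetical.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.12, 0.09, 0.15, 0.07])   # hypothetical expected returns
c = np.array([1.0, 0.8, 1.3, 0.9])        # hypothetical info weights
target = 0.11

def neg_wse(w):
    return np.sum(c * w * np.log(w))       # negative of weighted entropy

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1},
        {"type": "eq", "fun": lambda w: w @ mu - target}]
res = minimize(neg_wse, np.full(4, 0.25), bounds=[(1e-9, 1)] * 4,
               constraints=cons)
print("WSE-optimal weights:", res.x.round(4))
```

Raising an asset's informational weight c_i penalizes concentration in that asset more strongly, which is how asset-specific characteristics enter the diversification profile.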
11 pages, 2187 KB  
Article
Entropy and Minimax Risk Diversification: An Empirical and Simulation Study of Portfolio Optimization
by Hongyu Yang and Zijian Luo
Stats 2025, 8(4), 115; https://doi.org/10.3390/stats8040115 - 11 Dec 2025
Viewed by 803
Abstract
The optimal allocation of funds within a portfolio is a central research focus in finance. Conventional mean-variance models often concentrate a significant portion of funds in a limited number of high-risk assets. To promote diversification, Shannon Entropy is widely applied. This paper develops a portfolio optimization model that incorporates Shannon Entropy alongside a risk diversification principle aimed at minimizing the maximum individual asset risk. The study combines empirical analysis with numerical simulations. First, empirical data are used to assess the theoretical model’s effectiveness and practicality. Second, numerical simulations are conducted to analyze portfolio performance under extreme market scenarios. Specifically, the numerical results indicate that for fixed values of the risk balance coefficient and minimum expected return, the optimal portfolios and their return distributions are similar when the risk is measured by standard deviation, absolute deviation, or standard lower semi-deviation. This suggests that the model exhibits robustness to variations in the risk function, providing a relatively stable investment strategy. Full article
(This article belongs to the Special Issue Robust Statistics in Action II)
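The model class can be sketched via an epigraph reformulation of the minimax risk term plus an entropy reward; all inputs below are hypothetical and the paper's exact formulation may differ.

```python
# Sketch: minimize the largest single-asset risk contribution while
# rewarding Shannon-entropy diversification, via an epigraph variable t
# with w_i * sigma_i <= t. Inputs are hypothetical.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.10, 0.07, 0.12])       # expected returns
sigma = np.array([0.20, 0.10, 0.30])    # per-asset risk (std. dev.)
alpha, r_min = 0.05, 0.08               # entropy weight, return floor

def objective(z):
    w, t = z[:-1], z[-1]
    return t + alpha * np.sum(w * np.log(w))   # sum(w*log w) = -entropy

cons = [{"type": "eq",   "fun": lambda z: z[:-1].sum() - 1},
        {"type": "ineq", "fun": lambda z: z[:-1] @ mu - r_min},
        {"type": "ineq", "fun": lambda z: z[-1] - z[:-1] * sigma}]
z0 = np.append(np.full(3, 1 / 3), 0.1)
res = minimize(objective, z0, bounds=[(1e-9, 1)] * 3 + [(0, None)],
               constraints=cons)
print("weights:", res.x[:-1].round(4), "max risk:", res.x[-1].round(4))
```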
15 pages, 1395 KB  
Article
Research on the Performance Degradation Rules of Rolling Bearings Under Discrete Constant Conditions
by Liang Ye, Bo Liang, Jingxuan Pei and Yongcun Cui
Lubricants 2025, 13(12), 520; https://doi.org/10.3390/lubricants13120520 - 30 Nov 2025
Viewed by 473
Abstract
During the service life of bearings in ship systems, they operate under discrete constant operating conditions involving load levels, rotational speeds, and ambient vibrations. Traditional research mostly relies on idealized laboratory conditions of single constant load and constant speed, leading to significant discrepancies between the derived performance degradation patterns and actual on-site scenarios of marine bearings. In view of this, this study integrates the maximum entropy method (MEM) and Poisson counting principle (PCP) to analyze the variation law of the bearing performance degradation parameter—degradation probability—with changes in load ratio and rotational speed. Temperature gradients and impact vibrations are excluded to align with the actual experimental scope. The research results show that (1) the degradation probability of the optimal vibration performance state of the test bearings exhibits an overall nonlinear increasing trend during operation. (2) For the same time series, the degradation probability increases with the rise in load ratio and enters the non-zero phase earlier (indicating earlier degradation initiation). (3) Except for the 3rd–6th time series at 6000 r/min, the degradation probability within the same time series decreases with increasing rotational speed under discrete constant speed conditions. Full article
(This article belongs to the Special Issue Tribological Characteristics of Bearing System, 3rd Edition)
19 pages, 385 KB  
Article
Thermodynamics of Fluid Elements in the Context of Turbulent Isothermal Self-Gravitating Molecular Clouds in Virial Equilibrium
by Sava D. Donkov, Ivan Zhivkov Stefanov and Valentin Kopchev
Universe 2025, 11(12), 383; https://doi.org/10.3390/universe11120383 - 21 Nov 2025
Viewed by 491
Abstract
In this paper, we continue the study of the thermodynamics of fluid elements in isothermal turbulent self-gravitating systems, represented by molecular clouds. We build the model again on the hypothesis that, locally, the turbulent kinetic energy per fluid element can be substituted for the macro-temperature of a gas of fluid elements. Also, we presume that the cloud has a fractal nature. The virial theorem is applicable to our system too (hence it is in a dynamical equilibrium). But, in contrast to the previous work, where the turbulent kinetic energy clearly dominates over the gravity, in the present paper, we assume that the virial relation 2E_kin + E_grav = 0 holds for the entire cloud. Hence, the cloud is a dense and strongly self-gravitating object. On that basis, we calculate the internal and the total energy per fluid element. Writing down the first principle of thermodynamics, we obtain the explicit form of the entropy increment. It demonstrates untypical behavior. In the range 0 ≤ β < 0.4 for the turbulent scaling exponent, the entropy increment is positive, but in the interval 0.4 < β ≤ 1, it is negative, and for β_cr = 0.4, it is zero. The latter two regimes (negative and zero) cannot be explained from the classical point of view. However, we give some arguments for the reasons for these irregularities, the main one being that our cloud is an open self-organizing system driven by gravity. Moreover, we study the system for critical points under the conditions of three thermodynamic ensembles: micro-canonical, canonical, and grand canonical. Only the canonical ensemble exhibits a critical point, which is a maximum of the free energy and corresponds to an unstable equilibrium of the system. Analysis of the equilibrium potentials also shows that the system resides in unstable states under all the conditions. We explain these results by prompting the hypothesis that the virialized cloud is in the final unstable state before its contraction and subsequent fragmentation or collapse. Full article