Search Results (408)

Search Parameters:
Keywords = entropy-variance

24 pages, 3261 KB  
Article
Adaptive Exploration Proximal Policy Optimization for Efficient Robotic Continuous Control
by Jiajian Li, Mingrui Li and Hanshen Li
Symmetry 2026, 18(5), 717; https://doi.org/10.3390/sym18050717 - 24 Apr 2026
Abstract
Proximal Policy Optimization (PPO) is widely adopted for robotic continuous control, yet it can suffer from insufficient exploration and unstable policy updates in high-dimensional action spaces. This paper proposes Adaptive Exploration Proximal Policy Optimization (AE-PPO), an enhanced PPO framework that integrates (i) adaptive clipping, which adjusts the clipping range according to the observed magnitude of policy updates to better balance stability and learning progress, and (ii) adaptive entropy regularization, which schedules the entropy weight across training to maintain effective exploration while avoiding excessive randomness. AE-PPO is evaluated on standard MuJoCo continuous control benchmarks (e.g., Walker2d, HalfCheetah, and Humanoid) and compared with PPO and representative baselines such as Trust Region Policy Optimization (TRPO) and Soft Actor-Critic (SAC). The results show that AE-PPO achieves faster convergence and improved final performance with reduced training variance, demonstrating more stable and efficient learning in challenging high-dimensional tasks.
(This article belongs to the Section Computer)
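
The two mechanisms named in the abstract are easy to illustrate outside any particular RL library. Below is a minimal Python sketch of the general idea; the update rule, target-KL threshold, gain factor, and bounds are illustrative assumptions, not the authors' published hyperparameters.

```python
import numpy as np

def adapt_clip_range(clip_eps, observed_kl, target_kl=0.01,
                     factor=1.5, lo=0.05, hi=0.3):
    """Shrink the PPO clip range when policy updates are large
    (observed KL above target), widen it when they are small."""
    if observed_kl > target_kl * 1.5:
        clip_eps /= factor
    elif observed_kl < target_kl / 1.5:
        clip_eps *= factor
    return float(np.clip(clip_eps, lo, hi))

def entropy_coef_schedule(step, total_steps, c_start=0.02, c_end=0.001):
    """Anneal the entropy-bonus weight over training: strong early
    exploration, low late-stage randomness."""
    frac = min(step / total_steps, 1.0)
    return c_start + frac * (c_end - c_start)
```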

41 pages, 699 KB  
Article
Mathematical Framework for Characterizing Emotional Individuality in Large Language Models: Temperature Control, Fuzzy Entropy, and Persona-Based Diversity Analysis
by Naruki Shirahama, Yuma Yoshimoto, Naofumi Nakaya and Satoshi Watanabe
Mathematics 2026, 14(7), 1224; https://doi.org/10.3390/math14071224 - 6 Apr 2026
Abstract
Evaluating emotional understanding in Large Language Models (LLMs) is challenging because assessments are subjective, ambiguous, multidimensional, and sensitive to controllable generation parameters. We developed a unified mathematical framework for characterizing LLM “emotional individuality” that integrates softmax sampling–temperature control (the decoding-time temperature parameter exposed by the API and typically used to modulate output randomness during token generation), fuzzy set theory with Shannon-type fuzzy entropy, and persona-based cognitive diversity analysis. We evaluated 36 API-accessible LLMs from seven major vendors on Japanese literary texts, using four personas each assigned a sampling temperature (T ∈ {0.1, 0.4, 0.7, 0.9}), yielding 4227/4320 trial responses (97.8% coverage), of which 4067/4227 contained valid numeric emotion scores (96.2%). Temperature controllability varied approximately 25-fold (κ_M ∈ [0.039, 0.982]) with both positive and negative temperature–variance relationships across models. Because each sampling temperature is deterministically assigned to a persona in our design, κ_M should be interpreted as an operational temperature–variance association across persona conditions rather than an isolated causal temperature effect. The model-level mean fuzzy entropy ranged from approximately 0.40 to 0.66, and the numerical stability consistency scores ranged from approximately 0.548 to 0.780. We also observed text-dependent structure, including genre-specific variation in the Interest–Sadness relationship. For practitioners, the framework is most directly useful as a benchmark-design and model-screening template for structured emotion-scoring tasks; its empirical conclusions remain limited to the present Japanese literary, text-only setting.
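
Shannon-type fuzzy entropy has a standard form for a vector of membership degrees; the sketch below illustrates it under our own normalization convention (dividing by n·ln 2), since the paper's exact definition is not reproduced here.

```python
import numpy as np

def shannon_fuzzy_entropy(mu):
    """Shannon-type fuzzy entropy of membership degrees mu in [0, 1],
    normalized to [0, 1] by dividing by n * ln(2)."""
    mu = np.clip(np.asarray(mu, dtype=float), 1e-12, 1 - 1e-12)
    h = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))
    return float(h.sum() / (len(mu) * np.log(2)))

# Example: emotion scores rescaled to [0, 1] as membership degrees.
print(shannon_fuzzy_entropy([0.1, 0.5, 0.9]))  # mid memberships -> higher entropy
```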

22 pages, 10589 KB  
Article
An Improved Fault Diagnosis Method for Diesel Engines Based on Optimized Variational Mode Decomposition and Transformer-SVM
by Xiaoxin Ma, Shuyao Tian, Xianbiao Zhan, Hao Yan and Kaibo Cui
Processes 2026, 14(7), 1131; https://doi.org/10.3390/pr14071131 - 31 Mar 2026
Abstract
Due to the non-stationary and nonlinear characteristics of diesel engine vibration signals, fault features cannot be fully extracted, which limits fault diagnosis performance. To address this issue, an improved fault diagnosis method combining optimized Variational Mode Decomposition with a Transformer and Support Vector Machine is proposed. An improved dung beetle optimization algorithm is employed to obtain optimal parameters for Variational Mode Decomposition. The envelope entropy minimization principle is applied to select the optimal intrinsic mode functions after Variational Mode Decomposition, achieving signal denoising. Analysis of variance is integrated for feature significance testing to screen critical features. The selected features are fed into a Transformer network for training. At the final classification stage, the traditional SoftMax classifier is replaced with a Support Vector Machine classifier.
(This article belongs to the Special Issue AI-Driven Safe and High-Quality Development in Process Industries)
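
The envelope entropy minimization principle mentioned above is straightforward to illustrate with SciPy's Hilbert transform; a generic sketch of the selection step (the VMD decomposition itself is assumed to have already produced the candidate modes):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_entropy(mode):
    """Shannon entropy of the normalized Hilbert envelope of one mode.
    Lower values indicate a more structured (less noisy) component."""
    env = np.abs(hilbert(mode))
    p = env / env.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def select_best_mode(modes):
    """Pick the intrinsic mode function with minimum envelope entropy."""
    entropies = [envelope_entropy(m) for m in modes]
    return int(np.argmin(entropies)), entropies
```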

19 pages, 8328 KB  
Article
A Robust 3D Active Learning Framework Based on Multi-Metric Voting for Fast Electromagnetic Field Reconstruction with Sparse Sampling
by Yidi Hu, Kuiyuan Wang, Yujie Qi, Jiewen Deng, Kai Zhang, Zhi Tang, Lei Zhang and Tianwu Li
Electronics 2026, 15(7), 1434; https://doi.org/10.3390/electronics15071434 - 30 Mar 2026
Abstract
To mitigate the high measurement costs in electromagnetic compatibility (EMC) assessment, this paper proposes a robust active learning framework for fast 3D field reconstruction with sparse sampling. A novel “Four-Vote” query criterion is proposed to guide intelligent sample selection, which integrates Shannon entropy, committee variance, spatial density, and clustering-based representativeness, all derived from a heterogeneous radial basis function (RBF) committee. Furthermore, an adaptive polynomial degree adjustment mechanism is implemented to ensure stability in data-scarce 3D environments. Validated through full-wave HFSS simulations, the proposed method significantly outperforms traditional sampling strategies in both 2D and 3D scenarios, achieving high-fidelity field reconstruction with minimal sampling points. This framework provides an efficient solution for rapid spatial field mapping and EMC fault diagnosis in practical engineering scenarios.
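
The "Four-Vote" criterion aggregates several per-candidate measures before querying; how the votes are combined is not specified in the abstract, so the rank-sum aggregation below is our assumption, shown purely to make the structure concrete:

```python
import numpy as np

def four_vote_scores(entropy, committee_var, density, representativeness):
    """Aggregate four per-candidate criteria by summing their ranks.
    Each argument is a 1-D array over candidate sample points; higher
    values are assumed to mean 'more worth querying'."""
    votes = np.zeros_like(entropy, dtype=float)
    for criterion in (entropy, committee_var, density, representativeness):
        votes += np.argsort(np.argsort(criterion))  # rank within criterion
    return votes  # query the candidates with the highest total rank

# Example with 5 candidate points and toy criterion values:
best = np.argmax(four_vote_scores(
    np.array([0.2, 0.9, 0.5, 0.1, 0.7]),   # Shannon entropy
    np.array([0.3, 0.8, 0.4, 0.2, 0.9]),   # committee variance
    np.array([0.5, 0.6, 0.9, 0.1, 0.4]),   # spatial density
    np.array([0.4, 0.7, 0.8, 0.3, 0.5])))  # representativeness
print(best)
```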

52 pages, 51167 KB  
Article
Detection and Comparative Evaluation of Noise Perturbations in Simulated Dynamical Systems and ECG Signals Using Complexity-Based Features
by Kevin Mallinger, Sebastian Raubitzek, Sebastian Schrittwieser and Edgar Weippl
Mach. Learn. Knowl. Extr. 2026, 8(4), 85; https://doi.org/10.3390/make8040085 - 25 Mar 2026
Abstract
Noise contamination is a common challenge in the analysis of time series data, where stochastic perturbations can obscure deterministic dynamics and complicate the interpretation of signals from chaotic and physiological systems. Reliable identification of noise regimes and their intensity is therefore essential for robust analysis of dynamical and biomedical signals, where incorrect attribution of stochastic perturbations can lead to misleading interpretations of system behavior. For this reason, the present study examines the role of complexity-based descriptors for identifying stochastic perturbations in time series and analyzes how these metrics respond to different noise regimes across heterogeneous dynamical systems. A supervised learning approach based on complexity descriptors was developed to analyze controlled perturbations in multiple signal types. Gaussian, pink, and low-frequency noise disturbances were injected at predefined intensity levels into the Rössler and Lorenz chaotic systems, the Hénon map, and synthetic electrocardiogram signals, while AR(1) processes were used for validation on inherently stochastic signals. From these systems, eighteen entropy-based, fractal, statistical, and singular value decomposition-based complexity metrics were extracted from either raw signals or reconstructed phase spaces. These features were used to perform three classification tasks that capture different aspects of noise characterization, including detecting the presence of noise, identifying the perturbation type, and discriminating between different noise intensities. In addition to predictive modeling, the study evaluates the complexity profiles and feature relevance of the metrics under varying perturbation regimes. The results show that no single complexity metric consistently discriminates noise regimes across all systems. Instead, system-specific relevance patterns emerge. Under the given experimental constraints (data partitioning, machine learning algorithm, etc.), Approximate Entropy provides the strongest discrimination for the Lorenz system and the Hénon map; the Coefficient of Variation, Sample Entropy, and Permutation Entropy dominate classification for ECG signals; and the Condition Number and Variance of the first derivative, together with Fisher Information, are most informative for the Rössler system. Across all datasets, the proposed framework achieves an average accuracy of 99% for noise presence detection, 98.4% for noise type classification, and 98.5% for noise intensity classification. These findings demonstrate that complexity metrics capture structural and statistical signatures of stochastic perturbations across a diverse set of dynamical systems.
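
Among the eighteen descriptors, Sample Entropy (highlighted above for ECG signals) has a compact reference form; a minimal, unoptimized sketch with conventional defaults (m = 2, r = 0.2·std), which are not necessarily the study's settings:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample Entropy: -log of the ratio of (m+1)-length to m-length
    template matches within tolerance r (Chebyshev distance),
    excluding self-matches. O(n^2); fine for short series."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (d <= r).sum() - len(templates)  # drop self-matches (diagonal)

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```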

28 pages, 2584 KB  
Article
Improving Cross-Domain Generalization in Brain MRIs via Feature Space Stability Regularization
by Shawon Chakrabarty Kakon, Harishik Dev Singh Jamwal and Saurabh Singh
Mathematics 2026, 14(6), 1082; https://doi.org/10.3390/math14061082 - 23 Mar 2026
Abstract
Deep learning models for brain tumor classification from magnetic resonance imaging (MRI) often achieve high in-dataset accuracy but exhibit substantial performance degradation when evaluated on unseen clinical data due to domain shift arising from variations in imaging protocols and intensity distributions. Existing approaches largely rely on architectural scaling or parameter-level regularization, which do not explicitly constrain the stability of learned feature representations. This manuscript proposes Feature Space Stability Regularization (FSSR), a lightweight and model-agnostic training framework that enforces consistency in latent feature representations under realistic, MRI-safe intensity perturbations. FSSR introduces an auxiliary feature space loss that minimizes the ℓ2 distance between normalized embeddings extracted from the input MRI images and their intensity-perturbed counterparts, alongside standard cross-entropy supervision. This manuscript evaluates FSSR across three convolutional backbones, ResNet-18, ResNet-34, and DenseNet-121, trained exclusively on the Kaggle Brain MRI dataset. Feature space analysis demonstrates that FSSR consistently reduces mean feature deviation and variance across architectures, indicating more stable internal representations. Generalization is assessed via zero-shot evaluation on the fully unseen BRISC-2025 dataset without retraining or fine-tuning. On the source domain, the best-performing configuration achieves 97.71% accuracy and 97.55% macro-F1. Under domain shift, FSSR improves external accuracy by up to 8.20 percentage points and macro-F1 by up to 12.50 percentage points, with DenseNet-121 achieving 96.70% accuracy and 96.87% macro-F1 at a domain gap of only 0.94%. Confusion matrix analysis further reveals reduced class confusion and more stable recall across challenging tumor categories, demonstrating that feature-level stability is a key factor for robust brain MRI classification under domain shift.
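
The training objective described above, cross-entropy plus an ℓ2 penalty between normalized embeddings of an image and its intensity-perturbed copy, maps onto a few lines of PyTorch. A sketch under assumed interfaces: the model is taken to return (features, logits), and the perturbation form and weighting coefficient lam are illustrative, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def fssr_loss(model, x, y, lam=0.1):
    """Cross-entropy + feature-space stability penalty between an input
    batch and an intensity-perturbed copy (random gain and offset)."""
    gain = 1.0 + 0.1 * torch.randn(x.size(0), 1, 1, 1, device=x.device)
    x_pert = torch.clamp(x * gain + 0.05 * torch.randn_like(x), 0.0, 1.0)

    feats, logits = model(x)      # assumed: model returns (features, logits)
    feats_p, _ = model(x_pert)

    stability = F.mse_loss(F.normalize(feats, dim=1),
                           F.normalize(feats_p, dim=1))
    return F.cross_entropy(logits, y) + lam * stability
```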

31 pages, 629 KB  
Article
The One-Parameter Bounded p-Exponential Distribution: Properties, Inference, and Applications
by Hassan S. Bakouch, Hugo S. Salinas, Fernando A. Moala, Tassaddaq Hussain, Shaykhah Aldossari and Alanwood Al-Buainain
Mathematics 2026, 14(6), 1076; https://doi.org/10.3390/math14061076 - 22 Mar 2026
Abstract
We introduce the one-parameter bounded p-exponential distribution on (0, p+1), which includes the uniform model as a special case and converges pointwise to the exponential law as p → ∞. Closed-form expressions are derived for the CDF and PDF, the survival function, an explicit increasing-failure-rate hazard function, the quantile function (enabling inversion-based simulation), moments, and entropy, along with a constructive scaled beta or Kumaraswamy representation. We also establish stochastic ordering with respect to p in stop-loss and increasing convex order, formalizing how dispersion varies with the parameter while preserving the mean scale. Inference is discussed under parameter-dependent support, a non-regular setting, and we develop and compare several estimation procedures, including a likelihood-based boundary MLE, a variance-matching method-of-moments estimator, and Bayesian estimation under a gamma prior implemented via numerical quadrature or MCMC. Monte Carlo simulation studies evaluate finite-sample performance and interval behavior, and two real-world applications in survival and reliability analysis illustrate competitive goodness-of-fit relative to standard benchmark models.
(This article belongs to the Special Issue New Advances in Mathematical Applications for Reliability Analysis)
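
Inversion-based simulation via the quantile function follows the standard recipe; a sketch with a placeholder quantile function (quantile_fn is hypothetical here; the paper's closed form would be substituted):

```python
import numpy as np

def sample_by_inversion(quantile_fn, size, rng=None):
    """Inverse-transform sampling: draw U ~ Uniform(0, 1) and map
    through the quantile (inverse CDF) function."""
    rng = rng or np.random.default_rng()
    return quantile_fn(rng.uniform(size=size))

# Example with a known quantile function (unit exponential) as a stand-in:
draws = sample_by_inversion(lambda u: -np.log(1 - u), size=1000)
print(draws.mean())  # should be close to 1
```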

28 pages, 1479 KB  
Article
Double-Edged Sword of Diversification: Commodities and African Equity Indices in Robust vs. Optimal Portfolio Strategies
by Anaclet K. Kitenge, John W. M. Mwamba and Jules C. Mba
Econometrics 2026, 14(1), 15; https://doi.org/10.3390/econometrics14010015 - 16 Mar 2026
Abstract
This study empirically investigates a central tension in quantitative finance: the divergence between theoretically optimal and robust portfolio construction under real-world estimation uncertainty. Using a dynamic, time-varying optimization framework, we compare the performance of three distinct strategies: the Maximum Sharpe ratio (P1), Minimum Variance (P2), and Maximum Entropy (P3) portfolios, with and without commodity proxy inclusion (gold and oil) in a multi-asset universe featuring prominent African equity indices. Our key finding challenges classical theory: the robust Maximum Entropy portfolio (P3) achieved superior realized risk-adjusted returns (Sharpe ratio: 1.164) compared to the theoretically optimal Maximum Sharpe portfolio (P1, Sharpe: 0.788). This result validates the “estimation-error maximization” critique, as P1’s performance was undermined by its sensitivity to noisy inputs. Conversely, the Minimum Variance portfolio (P2) successfully fulfilled its objective, achieving the lowest volatility (~5%) at the cost of modest returns (3.01–3.64%), illustrating the classic risk–return trade-off. Euler decomposition revealed that even this low-volatility portfolio exhibited significant concentration risk, with over 40% of its risk attributable to just three assets. The role of commodities proves to be strategy-contingent: they significantly enhanced returns and the Sharpe ratio for the aggressive P1 but were marginally detrimental to the robust P3. African market indices played specialized roles: Egypt and Nigeria acted as return drivers in P1, Morocco became a major risk contributor within the concentrated P2 strategy, and South Africa provided key diversification in the well-balanced P3. Ultimately, the study demonstrates that portfolio risk is determined more by asset concentration and diversification quality than by geographic labels, and that robust diversification methodologies outperform fragile theoretical optima in practice. We conclude that portfolio construction must prioritize robustness to estimation error and explicit risk-balancing to ensure stable, real-world performance.
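
The Euler decomposition used here has a standard closed form for volatility: asset i contributes w_i · (Σw)_i / σ_p, and the contributions sum exactly to σ_p. A minimal sketch with toy numbers:

```python
import numpy as np

def euler_risk_contributions(w, cov):
    """Per-asset contributions to portfolio volatility; they sum to
    sigma_p exactly (Euler's theorem for this homogeneous risk measure)."""
    w = np.asarray(w, dtype=float)
    sigma_p = np.sqrt(w @ cov @ w)
    marginal = cov @ w / sigma_p      # d(sigma_p) / d(w_i)
    return w * marginal               # contributions, summing to sigma_p

# Example: check concentration of risk across three toy assets.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
rc = euler_risk_contributions([0.5, 0.3, 0.2], cov)
print(rc / rc.sum())  # share of total risk per asset
```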

25 pages, 598 KB  
Article
Study on an Enterprise Resilience Evaluation Model for Listed Real Estate Companies Based on the Entropy-Weighted TOPSIS Method
by Baojing Zhang, Yan Zheng, Dongqi Xie and Yipeng Zheng
Mathematics 2026, 14(6), 987; https://doi.org/10.3390/math14060987 - 14 Mar 2026
Abstract
In the context of a deep structural adjustment of China’s real estate sector and heightened macroeconomic uncertainty, quantitatively assessing the resilience of listed real estate enterprises is crucial for preventing systemic risk and promoting sustainable development. This paper proposes a multidimensional resilience evaluation framework for 37 Chinese A-share listed real estate firms using panel data from 2017–2024. An index system covering four dimensions (solvency and liquidity, profitability and cash flow, operational efficiency and asset structure, and growth and value) is constructed on the basis of financial ratios. The entropy-weighted TOPSIS method is employed to derive a composite resilience index, while principal component analysis (PCA) provides a complementary robustness check of the rankings. The empirical results indicate that (1) operational efficiency and asset structure receive the highest objective weight, followed by solvency and liquidity, whereas the weights of the profitability–cash flow and growth–value dimensions are relatively lower; at the indicator level, accounts receivable turnover, inventory turnover, and the cash-to-short-term-debt ratio play a leading role, underscoring the central importance of liquidity safety and asset turnover under the “three red lines” regulatory regime. (2) Firms such as Shahe Co. (Shenzhen), Huafa Co. (Zhuhai), and Wantong Development (Beijing) exhibit persistently higher resilience scores, characterized by lower leverage, stronger cash buffers, and faster operating turnover, whereas firms such as Yunnan Metropolitan Investment (Kunming), Greenland Holdings (Shanghai), Bright Real Estate (Shanghai), and Rongsheng Development (Langfang) remain at the lower tail of the resilience distribution with high leverage, tight liquidity, and volatile profitability. (3) The resilience rankings obtained from entropy-weighted TOPSIS and PCA are positively and significantly correlated at the 1% level, suggesting a moderate level of consistency between distance-based and variance-based evaluation schemes. Building on these findings, this paper proposes resilience-oriented policy recommendations for regulators and managers in terms of differentiated prudential regulation, capital-structure and debt-maturity optimization, operational efficiency enhancement, and the integration of digital transformation and ESG governance.
(This article belongs to the Special Issue Application of Multiple Criteria Decision Analysis)
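
Entropy-weighted TOPSIS is a standard two-step recipe: objective weights from column entropies, then closeness to ideal and anti-ideal points. A compact sketch that assumes all indicators are positive and benefit-type (cost-type indicators would be inverted first):

```python
import numpy as np

def entropy_topsis(X):
    """X: (firms x indicators) matrix of positive, benefit-type values.
    Returns entropy weights and TOPSIS closeness scores in [0, 1]."""
    P = X / X.sum(axis=0)                        # column-normalize
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    w = (1 - e) / (1 - e).sum()                  # entropy weights
    V = w * X / np.linalg.norm(X, axis=0)        # weighted, vector-normalized
    d_best = np.linalg.norm(V - V.max(axis=0), axis=1)
    d_worst = np.linalg.norm(V - V.min(axis=0), axis=1)
    return w, d_worst / (d_best + d_worst)       # closeness: higher is better
```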

30 pages, 1036 KB  
Article
Classical and Bayesian Inference for the Two-Parameter Chen Distribution with Random Censored Data
by Zihan Zhao, Wenhao Gui, Minghui Liu and Lanxi Zhang
Axioms 2026, 15(3), 213; https://doi.org/10.3390/axioms15030213 - 12 Mar 2026
Abstract
This study explores classical and Bayesian estimation for the two-parameter Chen distribution with randomly censored data, where censoring times follow an independent two-parameter Chen distribution with separate shape and scale parameters. We first derive the maximum likelihood estimators of the unknown parameters, together with their asymptotic variances and confidence intervals, and further adopt the method of moments, L-moments, and least squares methods for classical estimation. Under the generalized entropy loss function and inverse gamma priors, Bayesian estimation is implemented via Gibbs sampling, with the highest posterior density credible intervals of the parameters constructed accordingly. We also investigate the estimation of key reliability and lifetime characteristics of the distribution, and conduct Monte Carlo simulations to compare the performance of all the aforementioned estimation methods. Finally, two real-world CMAPSS jet engine lifetime datasets from NASA are applied to validate the practical effectiveness of the proposed estimation approaches, demonstrating the enhanced flexibility of the Chen distribution compared to the exponential distribution in fitting aerospace-related censored data, given the marginal p-values in the K-S tests.
(This article belongs to the Special Issue New Perspectives in Mathematical Statistics, 2nd Edition)
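
For the generalized entropy loss L(d, θ) ∝ (d/θ)^q − q·ln(d/θ) − 1, the Bayes estimator has the known closed form d = (E[θ^(−q)])^(−1/q), which is trivial to evaluate from Gibbs draws. A sketch (the q value and the stand-in posterior are illustrative, not the paper's settings):

```python
import numpy as np

def ge_loss_bayes_estimate(posterior_draws, q=1.0):
    """Bayes estimator under generalized entropy loss:
    theta_hat = (E[theta^(-q)])^(-1/q), estimated from MCMC draws."""
    draws = np.asarray(posterior_draws, dtype=float)
    return float(np.mean(draws ** (-q)) ** (-1.0 / q))

# Example: posterior draws from a Gibbs sampler (stand-in: gamma draws).
rng = np.random.default_rng(0)
print(ge_loss_bayes_estimate(rng.gamma(shape=3.0, scale=0.5, size=5000), q=1.0))
```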

39 pages, 2921 KB  
Article
Reasoning-Enhanced Query–Service Matching: A Large Language Model Approach with Adaptive Scoring and Diversity Optimization
by Yue Xiang, Jing Lu, Jinqian Wei and Yaowen Hu
Mathematics 2026, 14(6), 950; https://doi.org/10.3390/math14060950 - 11 Mar 2026
Abstract
Query–service matching in customer service systems faces a critical challenge of accurately aligning user queries expressed in colloquial language with formally defined services while balancing business objectives. Traditional keyword-based and embedding approaches fail to capture complex semantic nuances and cannot provide interpretable explanations. We address this problem by proposing a novel reasoning-enhanced framework that leverages large language models (LLMs) for structured multi-criteria evaluation. Our key innovation is a reasoning-first scoring architecture where the model generates detailed explanations before numerical scores, reducing score variance by 18% through conditional mutual information. We introduce a controlled stochastic perturbation mechanism with theoretically derived optimal parameters that balance diversity and relevance, alongside a knowledge distillation pipeline enabling 960× model compression (480B→0.5B parameters) while retaining 94% performance. Rigorous theoretical analysis establishes Pareto optimality guarantees for multi-criteria evaluation, information-theoretic entropy reduction bounds, and PAC learning guarantees for distillation. Experimental validation on real-world telecommunications data demonstrates 89% Precision@1 (15.3% improvement over baselines), 23% diversity enhancement, and 96× latency reduction, with deployment cost decreasing 1200× compared to direct LLM inference. This work bridges the gap between LLM capabilities and production deployment requirements through principled mathematical foundations and practical system design.
(This article belongs to the Special Issue Industrial Improvement with AI in Applied Mathematics)
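
The controlled stochastic perturbation mechanism (jittering match scores so repeated queries surface diverse but still relevant services) can be sketched generically; the Gaussian noise form and the sigma value below are our illustrative assumptions, not the paper's theoretically derived optimum:

```python
import numpy as np

def perturbed_ranking(scores, sigma=0.05, k=3, rng=None):
    """Re-rank candidate services by score + Gaussian noise.
    sigma = 0 reproduces the deterministic top-k; larger sigma
    trades relevance for diversity across repeated queries."""
    rng = rng or np.random.default_rng()
    noisy = np.asarray(scores, dtype=float) + rng.normal(0.0, sigma, len(scores))
    return np.argsort(noisy)[::-1][:k]  # indices of the top-k services
```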

34 pages, 12105 KB  
Article
A Hybrid MIL Architecture for Multi-Class Classification of Bacterial Microscopic Images
by Aisulu Ismailova, Gulbanu Yessenbayeva, Kuanysh Kadirkulov, Raushan Moldasheva, Elmira Eldarova, Gulnaz Zhilkishbayeva, Shynar Kodanova, Shynar Yelezhanova, Valentina Makhatova and Alexander Nedzved
Computers 2026, 15(3), 180; https://doi.org/10.3390/computers15030180 - 10 Mar 2026
Abstract
This paper addresses the problem of multi-class classification of bacterial microscopic images using a rigorous experimental protocol designed to prevent information leakage and improve performance. The dataset consists of 2034 images representing 33 taxa, organized by class. Data integrity checks confirmed the absence of corrupted or unreadable files. To formalize image characteristics and ensure quality control, indirect geometric and textural features were calculated, including minimum frame size, brightness statistics (mean and standard deviation), Shannon entropy, Laplace variance, and Sobel gradient energy. Quality checks revealed a small proportion of images with extreme brightness (2.5074%), while no samples with critically low sharpness according to the selected criteria were detected. Statistical analysis of interclass differences using the Kruskal–Wallis test with multiple comparison correction demonstrated the high discriminatory power of texture features, specifically gradient energy (ε² = 0.819987) and Laplace variance (ε² = 0.709904). Feature correlations were consistent with their physical interpretation, revealing a strong positive relationship between sharpness and gradient energy. Principal component analysis confirmed a strong structural pattern, with the first two components explaining 75.5766% of the total variance. For a unified comparison, classical machine learning, transfer learning, and modern deep architectures were evaluated within a single protocol.
(This article belongs to the Special Issue Machine Learning: Innovation, Implementation, and Impact)
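
The indirect quality features listed (Shannon entropy, Laplace variance, Sobel gradient energy) each reduce to a few lines of NumPy/SciPy; a generic sketch for an 8-bit grayscale image, with binning choices that are ours rather than the paper's:

```python
import numpy as np
from scipy import ndimage

def image_quality_features(img):
    """img: 2-D uint8 grayscale array. Returns Shannon entropy,
    Laplacian variance (sharpness), and Sobel gradient energy."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    entropy = float(-(hist[hist > 0] * np.log2(hist[hist > 0])).sum())

    f = img.astype(float)
    lap_var = float(ndimage.laplace(f).var())

    gx, gy = ndimage.sobel(f, axis=0), ndimage.sobel(f, axis=1)
    grad_energy = float(np.mean(gx ** 2 + gy ** 2))
    return entropy, lap_var, grad_energy
```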

26 pages, 871 KB  
Article
TimesNet-BFT: Mitigating Network State Uncertainty in Byzantine Consensus via Deep Temporal Modeling
by Haolong Wang, Haijun Liu, Yahui Liu, Hongliang Ma and Pan Gao
Entropy 2026, 28(3), 302; https://doi.org/10.3390/e28030302 - 8 Mar 2026
Abstract
Byzantine fault tolerance (BFT) protocols serve as the cornerstone of data consistency in permissioned blockchains; however, their scalability is inherently constrained by stochastic leader-centric bottlenecks and rigid, non-adaptive timeout mechanisms. Existing rule-based heuristics often fail to capture high-entropy and time-varying network latency, leading to frequent view changes and severe performance degradation under network volatility. To mitigate this epistemic uncertainty, this paper proposes TimesNet-BFT, a novel entropy-aware optimization framework. By leveraging TimesNet’s transformation of one-dimensional time series into two-dimensional tensors for multi-periodicity analysis, the framework accurately characterizes stochastic nodal latency patterns to facilitate entropy-minimized dynamic leader election and adaptive timeout strategies. Extensive evaluations conducted on simulated and real-world trace-driven Internet of Vehicles (IoV) scenarios validate the proposed approach, achieving a prediction MAPE below 5% alongside robust zero-shot generalization. Notably, under high-entropy network conditions, the framework demonstrates up to a 191.9% increase in throughput and reduces latency variance by 73.3%, effectively neutralizing the structural bottlenecks inherent to traditional information-agnostic protocols. Crucially, by mathematically decoupling consensus safety from AI prediction errors, the system introduces an aggressive liveness paradigm that maintains minimal control plane overhead while significantly enhancing the entropic stability of the consensus process.
(This article belongs to the Section Information Theory, Probability and Statistics)
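
Once per-node latency forecasts exist, the adaptive-timeout and leader-selection sides of such a design reduce to simple rules; the sketch below is a heavily simplified stand-in for the paper's TimesNet-driven strategy, with quantile, margin, and floor values chosen purely for illustration:

```python
import numpy as np

def adaptive_timeout(latency_history_ms, quantile=0.95, margin=1.2,
                     floor_ms=50.0):
    """Set the view-change timeout from an empirical latency quantile
    with a multiplicative safety margin, never below a floor."""
    q = np.quantile(np.asarray(latency_history_ms, dtype=float), quantile)
    return max(margin * q, floor_ms)

def elect_leader(predicted_latency_per_node):
    """Latency-aware heuristic in the spirit of the paper: prefer the
    node with the lowest predicted latency as the next leader."""
    return int(np.argmin(predicted_latency_per_node))
```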

27 pages, 687 KB  
Article
Chaotic Scaling and Network Turbulence in Crude Oil-Equity Systems Using a Coupled Multiscale Chaos Index
by Arash Sioofy Khoojine, Lin Xiao, Hao Chen and Congyin Wang
Int. J. Financial Stud. 2026, 14(3), 63; https://doi.org/10.3390/ijfs14030063 - 3 Mar 2026
Abstract
Financial markets often display nonlinear and turbulent dynamics during periods of stress, and crude-oil and global equity systems frequently demonstrate closely connected forms of instability. Earlier studies report multifractality, chaotic features and regime-dependent spillovers across commodities and equities, yet existing approaches rarely succeed in capturing both the intrinsic complexity of oil-market behavior and the changing structure of cross-asset dependence. This limitation reduces the ability to distinguish calm from turbulent regimes and weakens short-horizon risk assessment. The present study introduces a unified framework that quantifies and predicts systemic instability within the coupled oil–equity system. The analysis constructs a crude-oil complexity index based on multifractal fluctuation analysis, permutation and approximate entropy, and Lyapunov-based indicators of chaotic dynamics. At the same time, it develops an information-theoretic network of global equity and energy-sector returns and summarizes its instability through measures of edge turnover, spectral radius, degree entropy and strength dispersion. These components are combined to form the Coupled Multiscale Chaos Index (CMCI), a scalar state variable that distinguishes calm, transitional and chaotic market regimes. Empirical results indicate that Brent and WTI exhibit pronounced multifractality, elevated entropy and positive Lyapunov exponents, while the dependence network becomes more centralized, more clustered and more capable of shock amplification during high-CMCI states. The CMCI moves closely with realized volatility and provides significant predictive content for five-day variance across major global equity benchmarks, with performance superior to models that rely only on macro-financial controls. Out-of-sample evaluation shows that forecasts incorporating measures of complexity record substantially lower MSE and QLIKE losses. The findings indicate that systemic instability reflects the interaction between local chaotic dynamics in crude-oil markets and turbulence in the global dependence network. The CMCI offers a practical early-warning indicator that supports risk management, forecasting and macroprudential supervision.
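
Permutation entropy, one of the complexity ingredients of the CMCI, has a compact standard implementation; a sketch with conventional defaults (order 3, delay 1), which are not necessarily the paper's calibration:

```python
import numpy as np
from collections import Counter
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy in [0, 1]: Shannon entropy of
    ordinal patterns of length `order`, divided by log(order!)."""
    x = np.asarray(x, dtype=float)
    patterns = Counter(
        tuple(np.argsort(x[i:i + order * delay:delay]))
        for i in range(len(x) - (order - 1) * delay)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * log(c / total) for c in patterns.values())
    return h / log(factorial(order))
```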

18 pages, 339 KB  
Article
Entropy-Based Portfolio Optimization in Cryptocurrency Markets: A Unified Maximum Entropy Framework
by Silvia Dedu and Florentin Șerban
Entropy 2026, 28(3), 285; https://doi.org/10.3390/e28030285 - 2 Mar 2026
Abstract
Traditional mean–variance portfolio optimization proves inadequate for cryptocurrency markets, where extreme volatility, fat-tailed return distributions, and unstable correlation structures undermine the validity of variance as a comprehensive risk measure. To address these limitations, this paper proposes a unified entropy-based portfolio optimization framework grounded in the Maximum Entropy Principle (MaxEnt). Within this setting, Shannon entropy, Tsallis entropy, and Weighted Shannon Entropy (WSE) are formally derived as particular specifications of a common constrained optimization problem solved via the method of Lagrange multipliers, ensuring analytical coherence and mathematical transparency. Moreover, the proposed MaxEnt formulation provides an information-theoretic interpretation of portfolio diversification as an inference problem under uncertainty, where optimal allocations correspond to the least informative distributions consistent with prescribed moment constraints. In this perspective, entropy acts as a structural regularizer that governs the geometry of diversification rather than as a direct proxy for risk. This interpretation strengthens the conceptual link between entropy, uncertainty quantification, and decision-making in complex financial systems, offering a robust and distribution-free alternative to classical variance-based portfolio optimization. The proposed framework is empirically illustrated using a portfolio composed of major cryptocurrencies—Bitcoin (BTC), Ethereum (ETH), Solana (SOL), and Binance Coin (BNB)—based on weekly return data. The results reveal systematic differences in the diversification behavior induced by each entropy measure: Shannon entropy favors near-uniform allocations, Tsallis entropy imposes stronger penalties on concentration and enhances robustness to tail risk, while WSE enables the incorporation of asset-specific informational weights reflecting heterogeneous market characteristics. From a theoretical perspective, the paper contributes a coherent MaxEnt formulation that unifies several entropy measures within a single information-theoretic optimization framework, clarifying the role of entropy as a structural regularizer of diversification. From an applied standpoint, the results indicate that entropy-based criteria yield stable and interpretable allocations across turbulent market regimes, offering a flexible alternative to classical risk-based portfolio construction. The framework naturally extends to dynamic multi-period settings and alternative entropy formulations, providing a foundation for future research on robust portfolio optimization under uncertainty.
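
For the Shannon-entropy case, the Lagrange-multiplier solution has the familiar exponential-family form: maximizing −Σ w_i ln w_i subject to Σ w_i = 1 and a target mean return gives w_i ∝ exp(λ μ_i). A sketch that solves for λ numerically (the bracketing bounds and the example numbers are our assumptions; the target return must lie between the smallest and largest μ_i):

```python
import numpy as np
from scipy.optimize import brentq

def maxent_weights(mu, target_return):
    """Maximum-entropy weights under sum-to-one and mean-return
    constraints: w_i proportional to exp(lambda * mu_i)."""
    mu = np.asarray(mu, dtype=float)

    def softmax_w(lam):
        z = np.exp(lam * (mu - mu.max()))  # stabilized softmax
        return z / z.sum()

    lam = brentq(lambda l: softmax_w(l) @ mu - target_return, -500, 500)
    return softmax_w(lam)

# Example: weekly mean returns for BTC, ETH, SOL, BNB (illustrative numbers).
print(maxent_weights([0.010, 0.008, 0.015, 0.006], target_return=0.010))
```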