Entropy, Volume 27, Issue 7 (July 2025) – 117 articles

Cover Story (view full-size image): This image is taken from a molecular dynamics study of a water molecule that either scatters or sticks and then desorbs from a graphite surface [Markovic, N.; Poulsen, J. J. Phys. Chem. A 2008, 112, 1701–1711]. All 303 atoms were quantized using a Wigner distribution based on the so-called gradient sampling implementation of the Feynman–Kleinert effective frequency theory, described in this review. It was found that the sticking lifetimes of water molecules were markedly reduced by the quantization of the atoms, making them more energetic through the inclusion of quantum zero-point motion. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
25 pages, 10024 KiB  
Article
Forecasting with a Bivariate Hysteretic Time Series Model Incorporating Asymmetric Volatility and Dynamic Correlations
by Hong Thi Than
Entropy 2025, 27(7), 771; https://doi.org/10.3390/e27070771 - 21 Jul 2025
Viewed by 220
Abstract
This study explores asymmetric volatility structures within multivariate hysteretic autoregressive (MHAR) models that incorporate conditional correlations, aiming to flexibly capture the dynamic behavior of global financial assets. The proposed framework integrates regime switching and time-varying delays governed by a hysteresis variable, enabling the model to account for both asymmetric volatility and evolving correlation patterns over time. We adopt a fully Bayesian inference approach using adaptive Markov chain Monte Carlo (MCMC) techniques, allowing for the joint estimation of model parameters, Value-at-Risk (VaR), and Marginal Expected Shortfall (MES). The accuracy of VaR forecasts is assessed through two standard backtesting procedures. Our empirical analysis involves both simulated data and real-world financial datasets to evaluate the model’s effectiveness in capturing downside risk dynamics. We demonstrate the application of the proposed method on three pairs of daily log returns involving the S&P500, Bank of America (BAC), Intercontinental Exchange (ICE), and Goldman Sachs (GS), present the results obtained, and compare them against the original model framework. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
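The abstract above mentions that VaR forecasts are assessed through two standard backtesting procedures. One widely used procedure of this kind (not necessarily the one used in the paper) is Kupiec's proportion-of-failures (POF) likelihood-ratio test, sketched here; the observation counts in the usage note are illustrative only.

```python
from math import log

def kupiec_pof(n, x, p):
    """Kupiec proportion-of-failures LR statistic.

    n: sample size, x: observed VaR violations, p: nominal violation
    probability (e.g. 0.01 for a 99% VaR). Asymptotically chi^2(1)
    under correct unconditional coverage.
    """
    if x == 0:
        return -2.0 * n * log(1.0 - p)       # degenerate case pi = 0
    if x == n:
        return -2.0 * n * log(p)             # degenerate case pi = 1
    pi = x / n                               # observed violation rate
    log_l0 = (n - x) * log(1.0 - p) + x * log(p)     # null: rate = p
    log_l1 = (n - x) * log(1.0 - pi) + x * log(pi)   # alt: rate = x/n
    return -2.0 * (log_l0 - log_l1)
```

For 250 daily observations of a 99% VaR, 3 violations give a statistic well below the 5% critical value of 3.84, while 20 violations reject correct coverage decisively.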

16 pages, 386 KiB  
Article
State Space Correspondence and Cross-Entropy Methods in the Assessment of Bidirectional Cardiorespiratory Coupling in Heart Failure
by Beatrice Cairo, Riccardo Pernice, Nikola N. Radovanović, Luca Faes, Alberto Porta and Mirjana M. Platiša
Entropy 2025, 27(7), 770; https://doi.org/10.3390/e27070770 - 20 Jul 2025
Viewed by 299
Abstract
The complex interplay between the cardiac and the respiratory systems, termed cardiorespiratory coupling (CRC), is a bidirectional phenomenon that can be affected by pathologies such as heart failure (HF). In the present work, the potential changes in strength of directional CRC were assessed in HF patients classified according to their cardiac rhythm via two measures of coupling based on k-nearest neighbor (KNN) estimation approaches, cross-entropy (CrossEn) and state space correspondence (SSC), applied on the heart period (HP) and respiratory (RESP) variability series, while also accounting for the complexity of the cardiac and respiratory rhythms. We tested the measures on 25 HF patients with sinus rhythm (SR, age: 58.9 ± 9.7 years; 23 males) and 41 HF patients with ventricular arrhythmia (VA, age 62.2 ± 11.0 years; 30 males). A predominant directionality of interaction from the cardiac to the respiratory rhythm was observed in both cohorts and using both methodologies, with similar statistical power, while a lower complexity for the RESP series compared to HP series was observed in the SR cohort. We conclude that CrossEn and SSC can be considered strictly related to each other when using a KNN technique for the estimation of the cross-predictability markers. Full article
(This article belongs to the Special Issue Entropy Methods for Cardiorespiratory Coupling Analysis)

22 pages, 437 KiB  
Article
Approximate Secret Sharing in Field of Real Numbers
by Jiaqi Wan, Ziyue Wang, Yongqiang Yu and Xuehu Yan
Entropy 2025, 27(7), 769; https://doi.org/10.3390/e27070769 - 20 Jul 2025
Viewed by 166
Abstract
In the era of big data, the security of information encryption systems has garnered extensive attention, particularly in critical domains such as financial transactions and medical data management. While traditional Shamir’s Secret Sharing (SSS) ensures secure integer sharing through threshold cryptography, it exhibits inherent limitations when applied to floating-point domains and high-precision numerical scenarios. To address these issues, this paper proposes an innovative algorithm to optimize SSS via type-specific coding for real numbers. By categorizing real numbers into four types—rational numbers, special irrationals, common irrationals, and general irrationals—our approach achieves lossless transmission for rational numbers, special irrationals, and common irrationals, while enabling low-loss recovery for general irrationals. The scheme leverages a type-coding system to embed data category identifiers in polynomial coefficients, combined with Bernoulli-distributed random bit injection to enhance security. The experimental results validate its effectiveness in balancing precision and security across various real-number types. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
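For readers unfamiliar with the baseline scheme the paper extends, a minimal sketch of classical Shamir secret sharing over a prime field (integers only, not the paper's real-number extension; the prime and parameters are arbitrary choices):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all arithmetic is mod PRIME

def make_shares(secret, k, n):
    """Split an integer secret into n shares; any k recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any k of the n shares reconstruct the secret exactly; fewer than k reveal nothing, which is the information-theoretic guarantee the floating-point setting complicates.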

12 pages, 493 KiB  
Article
Exploring Non-Gaussianity Reduction in Quantum Channels
by Micael Andrade Dias and Francisco Marcos de Assis
Entropy 2025, 27(7), 768; https://doi.org/10.3390/e27070768 - 20 Jul 2025
Viewed by 218
Abstract
The quantum relative entropy between a quantum state and its Gaussian equivalent is a quantifying function of the system’s non-Gaussianity, a useful resource in several applications, such as quantum communication and computation. One of its most fundamental properties is to be monotonically decreasing under Gaussian evolutions. In this paper, we develop the conditions for a non-Gaussian quantum channel to preserve the monotonically decreasing property. We propose a necessary condition to classify between Gaussian and non-Gaussian channels and use it to define a class of quantum channels that decrease the system’s non-Gaussianity. We also discuss how this property, combined with a restriction on the states at the channel’s input, can be applied to the security analysis of continuous-variable quantum key distribution protocols. Full article

16 pages, 1538 KiB  
Article
A Quantum-like Approach to Semantic Text Classification
by Anastasia S. Gruzdeva, Rodion N. Iurev, Igor A. Bessmertny, Andrei Y. Khrennikov and Alexander P. Alodjants
Entropy 2025, 27(7), 767; https://doi.org/10.3390/e27070767 - 19 Jul 2025
Viewed by 204
Abstract
In this work, we conduct a sentiment analysis of English-language reviews using a quantum-like (wave-based) model of text representation. This model is explored as an alternative to machine learning (ML) techniques for text classification and analysis tasks. Special attention is given to the problem of segmenting text into semantic units, and we illustrate how the choice of segmentation algorithm is influenced by the structure of the language. We investigate the impact of quantum-like semantic interference on classification accuracy and compare the results with those obtained using classical probabilistic methods. Our findings show that accounting for interference effects improves accuracy by approximately 15%. We also explore methods for reducing the computational cost of algorithms based on the wave model of text representation. The results demonstrate that the quantum-like model can serve as a viable alternative or complement to traditional ML approaches. The model achieves classification precision and recall scores of around 0.8. Furthermore, the classification algorithm is readily amenable to optimization: the proposed procedure reduces the estimated computational complexity from O(n2) to O(n). Full article
(This article belongs to the Section Multidisciplinary Applications)

8 pages, 296 KiB  
Communication
Equivalence of Informations Characterizes Bregman Divergences
by Philip S. Chodrow
Entropy 2025, 27(7), 766; https://doi.org/10.3390/e27070766 - 19 Jul 2025
Viewed by 202
Abstract
Bregman divergences form a class of distance-like comparison functions which plays fundamental roles in optimization, statistics, and information theory. One important property of Bregman divergences is that they generate agreement between two useful formulations of information content (in the sense of variability or non-uniformity) in weighted collections of vectors. The first of these is the Jensen gap information, which measures the difference between the mean value of a strictly convex function evaluated on a weighted set of vectors and the value of that function evaluated at the centroid of that collection. The second of these is the divergence information, which measures the mean divergence of the vectors in the collection from their centroid. In this brief note, we prove that the agreement between Jensen gap and divergence informations in fact characterizes the class of Bregman divergences; they are the only divergences that generate this agreement for arbitrary weighted sets of data vectors. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
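The agreement the note proves can be checked numerically for the best-known Bregman divergence, the squared Euclidean distance generated by the strictly convex function φ(v) = ‖v‖²: the Jensen gap information equals the divergence information for any weighted collection of vectors. A small sketch (the data and weights below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))       # a collection of 50 vectors in R^3
w = rng.random(50)
w /= w.sum()                       # normalized weights

def phi(v):
    return float(np.dot(v, v))     # strictly convex generator ||v||^2

centroid = w @ X

# Jensen gap information: weighted mean of phi minus phi at the centroid
jensen_gap = sum(wi * phi(xi) for wi, xi in zip(w, X)) - phi(centroid)

# Divergence information: weighted mean Bregman divergence (here the
# squared Euclidean distance) of each vector from the centroid
divergence_info = sum(wi * phi(xi - centroid) for wi, xi in zip(w, X))

assert np.isclose(jensen_gap, divergence_info)
```

The paper's result is the converse direction: this agreement for arbitrary weighted collections holds only for Bregman divergences.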

20 pages, 459 KiB  
Article
Post-Quantum Secure Multi-Factor Authentication Protocol for Multi-Server Architecture
by Yunhua Wen, Yandong Su and Wei Li
Entropy 2025, 27(7), 765; https://doi.org/10.3390/e27070765 - 18 Jul 2025
Viewed by 193
Abstract
The multi-factor authentication (MFA) protocol requires users to provide a combination of a password, a smart card and biometric data as verification factors to gain access to the services they need. In a single-server MFA system, users accessing multiple distinct servers must register separately for each server, manage multiple smart cards, and remember numerous passwords. In contrast, an MFA system designed for multi-server architecture allows users to register once at a registration center (RC) and then access all associated servers with a single smart card and one password. MFA with an offline RC addresses the computational bottleneck and single-point failure issues associated with the RC. In this paper, we propose a post-quantum secure MFA protocol for a multi-server architecture with an offline RC. Our MFA protocol utilizes the post-quantum secure Kyber key encapsulation mechanism and an information-theoretically secure fuzzy extractor as its building blocks. We formally prove the post-quantum semantic security of our MFA protocol under the real or random (ROR) model in the random oracle paradigm. Compared to related protocols, our protocol achieves higher efficiency and maintains reasonable communication overhead. Full article

20 pages, 7353 KiB  
Article
Comparative Analysis of Robust Entanglement Generation in Engineered XX Spin Chains
by Eduardo K. Soares, Gentil D. de Moraes Neto and Fabiano M. Andrade
Entropy 2025, 27(7), 764; https://doi.org/10.3390/e27070764 - 18 Jul 2025
Viewed by 242
Abstract
We present a numerical investigation comparing two entanglement generation protocols in finite XX spin chains with varying spin magnitudes (s=1/2,1,3/2). Protocol 1 (P1) relies on staggered couplings to steer correlations toward the ends of the chain. At the same time, Protocol 2 (P2) adopts a dual-port architecture that uses optimized boundary fields to mediate virtual excitations between terminal spins. Our results show that P2 consistently outperforms P1 in all spin values, generating higher-fidelity entanglement in shorter timescales when evaluated under the same system parameters. Furthermore, P2 exhibits superior robustness under realistic imperfections, including diagonal and off-diagonal disorder, as well as dephasing noise. To further assess the resilience of both protocols in experimentally relevant settings, we employ the pseudomode formalism to characterize the impact of non-Markovian noise on the entanglement dynamics. Our analysis reveals that the dual-port mechanism (P2) remains effective even when memory effects are present, as it reduces the excitation of bulk modes that would otherwise enhance environment-induced backflow. Together, the scalability, efficiency, and noise resilience of the dual-port approach position it as a promising framework for entanglement distribution in solid-state quantum information platforms. Full article
(This article belongs to the Special Issue Entanglement in Quantum Spin Systems)

20 pages, 3787 KiB  
Article
Enhancing Robustness of Variational Data Assimilation in Chaotic Systems: An α-4DVar Framework with Rényi Entropy and α-Generalized Gaussian Distributions
by Yuchen Luo, Xiaoqun Cao, Kecheng Peng, Mengge Zhou and Yanan Guo
Entropy 2025, 27(7), 763; https://doi.org/10.3390/e27070763 - 18 Jul 2025
Viewed by 212
Abstract
Traditional 4-dimensional variational data assimilation methods have limitations due to the Gaussian distribution assumption of observation errors, and the gradient of the objective functional is vulnerable to observation noise and outliers. To address these issues, this paper proposes a non-Gaussian nonlinear data assimilation method called α-4DVar, based on Rényi entropy and the α-generalized Gaussian distribution. By incorporating the heavy-tailed property of Rényi entropy, the objective function and its gradient suitable for non-Gaussian errors are derived, and numerical experiments are conducted using the Lorenz-63 model. Experiments are conducted with Gaussian and non-Gaussian errors as well as different initial guesses to compare the assimilation effects of traditional 4DVar and α-4DVar. The results show that α-4DVar performs as well as the traditional method in the absence of observational errors. Its analysis field is closer to the truth, with RMSE rapidly dropping to a low level and remaining stable, particularly under non-Gaussian errors. Under different initial guesses, the RMSE of both the background and analysis fields decreases quickly and stabilizes. In conclusion, the α-4DVar method demonstrates significant advantages in handling non-Gaussian observational errors, robustness against noise, and adaptability to various observational conditions, thus offering a more reliable and effective solution for data assimilation. Full article
(This article belongs to the Section Complexity)
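The Lorenz-63 system used as the testbed in these experiments is straightforward to reproduce. Below is a minimal RK4 integration with the standard chaotic parameters (σ=10, ρ=28, β=8/3); this is only the forward model, not the assimilation scheme, and the step size and initial state are conventional choices rather than the paper's settings.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 ODE system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def rk4_step(s, dt=0.01):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz63(s)
    k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2)
    k4 = lorenz63(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])
traj = [state]
for _ in range(2000):            # 20 model time units at dt = 0.01
    state = rk4_step(state)
    traj.append(state)
traj = np.array(traj)            # trajectory on the chaotic attractor
```

In a variational experiment, noisy observations would be sampled from such a truth trajectory and the assimilation method judged by the RMSE of its analysis against the truth.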

18 pages, 821 KiB  
Article
Joint Iterative Decoding Design of Cooperative Downlink SCMA Systems
by Hao Cheng, Min Zhang and Ruoyu Su
Entropy 2025, 27(7), 762; https://doi.org/10.3390/e27070762 - 18 Jul 2025
Viewed by 207
Abstract
Sparse code multiple access (SCMA) is a competitive multiple-access candidate for future communication networks owing to its superior spectral efficiency and support for massive connectivity. However, cell-edge users may suffer severe performance degradation due to signal attenuation. Therefore, a cooperative downlink SCMA system is proposed to improve transmission reliability. To the best of our knowledge, multiuser detection remains an open issue for this cooperative downlink SCMA system. To this end, we propose a joint iterative decoding design for the cooperative downlink SCMA system using the joint factor graph stemming from direct and relay transmission. The closed-form bit-error rate (BER) performance of the cooperative downlink SCMA system is also derived. Simulation results verify that the proposed cooperative downlink SCMA system outperforms its non-cooperative counterpart. Full article
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)

14 pages, 3176 KiB  
Article
Impact of Data Distribution and Bootstrap Setting on Anomaly Detection Using Isolation Forest in Process Quality Control
by Hyunyul Choi and Kihyo Jung
Entropy 2025, 27(7), 761; https://doi.org/10.3390/e27070761 - 18 Jul 2025
Viewed by 272
Abstract
This study investigates the impact of data distribution and bootstrap resampling on the anomaly detection performance of the Isolation Forest (iForest) algorithm in statistical process control. Although iForest has received attention for its multivariate and ensemble-based nature, its performance under non-normal data distributions and varying bootstrap settings remains underexplored. To address this gap, a comprehensive simulation was performed across 18 scenarios involving log-normal, gamma, and t-distributions with different mean shift levels and bootstrap configurations. The results show that iForest substantially outperforms the conventional Hotelling’s T2 control chart, especially in non-Gaussian settings and under small-to-medium process shifts. Enabling bootstrap resampling led to marginal improvements across classification metrics, including accuracy, precision, recall, F1-score, and average run length (ARL1). However, a key limitation of iForest was its reduced sensitivity to subtle process changes, such as a 1σ mean shift, highlighting an area for future enhancement. Full article
(This article belongs to the Section Multidisciplinary Applications)
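A minimal version of the kind of experiment described, iForest with bootstrap resampling on skewed non-Gaussian process data, can be sketched with scikit-learn; the data, shift size, and hyperparameters here are illustrative choices, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Skewed, non-Gaussian "in-control" process data (log-normal), one of
# the distribution families the study varies
normal = rng.lognormal(mean=0.0, sigma=0.5, size=(300, 2))
# A few shifted points playing the role of out-of-control observations
shifted = normal[:10] + 5.0
X = np.vstack([normal, shifted])

# bootstrap=True draws each tree's subsample with replacement -- the
# setting whose effect on detection performance the study examines
clf = IsolationForest(n_estimators=200, contamination=0.05,
                      bootstrap=True, random_state=0)
labels = clf.fit_predict(X)     # -1 = anomaly, 1 = normal
```

In the study's framing, metrics such as precision, recall, and ARL1 would then be computed by comparing these labels against the known in-control/out-of-control ground truth over repeated simulations.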

18 pages, 1438 KiB  
Article
Maximum Entropy Estimates of Hubble Constant from Planck Measurements
by David P. Knobles and Mark F. Westling
Entropy 2025, 27(7), 760; https://doi.org/10.3390/e27070760 - 16 Jul 2025
Viewed by 985
Abstract
A maximum entropy (ME) methodology was used to infer the Hubble constant from the temperature anisotropies in cosmic microwave background (CMB) measurements, as measured by the Planck satellite. A simple cosmological model provided physical insight and afforded robust statistical sampling of a parameter space. The parameter space included the spectral tilt and amplitude of adiabatic density fluctuations of the early universe and the present-day ratios of dark energy, matter, and baryonic matter density. A statistical temperature was estimated by applying the equipartition theorem, which uniquely specifies a posterior probability distribution. The ME analysis inferred the mean value of the Hubble constant to be about 67 km/s/Mpc with a conservative standard deviation of approximately 4.4 km/s/Mpc. Unlike standard Bayesian analyses that incorporate specific noise models, the ME approach treats the model error generically, thereby producing broader, but less assumption-dependent, uncertainty bounds. The inferred ME value lies within 1σ of both early-universe estimates (Planck, Dark Energy Spectroscopic Instrument (DESI)) and late-universe measurements (e.g., the Chicago Carnegie Hubble Program (CCHP)) using redshift data collected from the James Webb Space Telescope (JWST). Thus, the ME analysis does not appear to support the existence of the Hubble tension. Full article
(This article belongs to the Special Issue Insight into Entropy)

20 pages, 2382 KiB  
Article
Heterogeneity-Aware Personalized Federated Neural Architecture Search
by An Yang and Ying Liu
Entropy 2025, 27(7), 759; https://doi.org/10.3390/e27070759 - 16 Jul 2025
Viewed by 267
Abstract
Federated learning (FL), which enables collaborative learning across distributed nodes, confronts a significant heterogeneity challenge, primarily including resource heterogeneity induced by different hardware platforms, and statistical heterogeneity originating from non-IID private data distributions among clients. Neural architecture search (NAS), particularly one-shot NAS, holds great promise for automatically designing optimal personalized models tailored to such heterogeneous scenarios. However, the coexistence of both resource and statistical heterogeneity destabilizes the training of the one-shot supernet, impairs the evaluation of candidate architectures, and ultimately hinders the discovery of optimal personalized models. To address this problem, we propose a heterogeneity-aware personalized federated NAS (HAPFNAS) method. First, we leverage lightweight knowledge models to distill knowledge from clients to server-side supernet, thereby effectively mitigating the effects of heterogeneity and enhancing the training stability. Then, we build random-forest-based personalized performance predictors to enable the efficient evaluation of candidate architectures across clients. Furthermore, we develop a model-heterogeneous FL algorithm called heteroFedAvg to facilitate collaborative model training for the discovered personalized models. Comprehensive experiments on CIFAR-10/100 and Tiny-ImageNet classification datasets demonstrate the effectiveness of our HAPFNAS, compared to state-of-the-art federated NAS methods. Full article
(This article belongs to the Section Signal and Data Analysis)

16 pages, 862 KiB  
Article
Random Search Walks Inside Absorbing Annuli
by Anderson S. Bibiano-Filho, Jandson F. O. de Freitas, Marcos G. E. da Luz, Gandhimohan M. Viswanathan and Ernesto P. Raposo
Entropy 2025, 27(7), 758; https://doi.org/10.3390/e27070758 - 15 Jul 2025
Viewed by 218
Abstract
We revisit the problem of random search walks in the two-dimensional (2D) space between concentric absorbing annuli, in which a searcher performs random steps until finding either the inner or the outer ring. By considering step lengths drawn from a power-law distribution, we obtain the exact analytical result for the search efficiency η in the ballistic limit, as well as an approximate expression for η in the regime of searches starting far away from both rings, and the scaling behavior of η for very small initial distances to the inner ring. Our numerical results show good overall agreement with the theoretical findings. We also analyze numerically the absorbing probabilities related to the encounter of the inner and outer rings and the associated Shannon entropy. The power-law exponent marking the crossing of such probabilities (equiprobability) and the maximum entropy condition grows logarithmically with the starting distance. Random search walks inside absorbing annuli are relevant, since they represent a mean-field approach to conventional random searches in 2D, which is still an open problem with important applications in various fields. Full article
(This article belongs to the Special Issue Transport in Complex Environments)
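A toy Monte Carlo of the setup, a power-law walker between two absorbing circles, can be sketched as follows. All parameter values are hypothetical, and absorption is checked only at step endpoints (a simplification; the paper's treatment of the search problem is more careful than this sketch).

```python
import numpy as np

def search_in_annulus(r_in=1.0, r_out=10.0, r_start=5.0, mu=2.0,
                      ell0=0.1, n_walks=300, seed=1):
    """Fraction of walkers absorbed at the inner ring.

    Step lengths follow p(ell) ~ ell^(-mu) with ell >= ell0, sampled
    by inverse-CDF; directions are uniform. Absorption is registered
    when a step endpoint lands inside the inner disk or beyond the
    outer ring (endpoint check only -- a deliberate simplification).
    """
    rng = np.random.default_rng(seed)
    inner_hits = 0
    for _ in range(n_walks):
        x, y = r_start, 0.0
        for _ in range(100_000):          # safety cap on steps
            ell = ell0 * rng.random() ** (-1.0 / (mu - 1.0))
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x += ell * np.cos(theta)
            y += ell * np.sin(theta)
            r = np.hypot(x, y)
            if r <= r_in:
                inner_hits += 1
                break
            if r >= r_out:
                break
    return inner_hits / n_walks
```

Sweeping the exponent mu and the starting radius in such a simulation is the numerical counterpart of the equiprobability and efficiency analyses described in the abstract.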

33 pages, 1024 KiB  
Article
Graph-Theoretic Limits of Distributed Computation: Entropy, Eigenvalues, and Chromatic Numbers
by Mohammad Reza Deylam Salehi and Derya Malak
Entropy 2025, 27(7), 757; https://doi.org/10.3390/e27070757 - 15 Jul 2025
Viewed by 290
Abstract
We address the problem of the distributed computation of arbitrary functions of two correlated sources, X1 and X2, residing in two distributed source nodes, respectively. We exploit the structure of a computation task by coding source characteristic graphs (and multiple instances using the n-fold OR product of this graph with itself). For regular graphs and general graphs, we establish bounds on the optimal rate—characterized by the chromatic entropy for the n-fold graph products—that allows a receiver for asymptotically lossless computation of arbitrary functions over finite fields. For the special class of cycle graphs (i.e., 2-regular graphs), we establish an exact characterization of chromatic numbers and derive bounds on the required rates. Next, focusing on the more general class of d-regular graphs, we establish connections between d-regular graphs and expansion rates for n-fold graph products using graph spectra. Finally, for general graphs, we leverage the Gershgorin Circle Theorem (GCT) to provide a characterization of the spectra, which allows us to derive new bounds on the optimal rate. Our codes leverage the spectra of the computation and provide a graph expansion-based characterization to succinctly capture the computation structure, providing new insights into the problem of distributed computation of arbitrary functions. Full article
(This article belongs to the Special Issue Information Theory and Data Compression)
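The Gershgorin Circle Theorem invoked for the spectral characterization is simple to verify numerically: every eigenvalue of a matrix lies in the union of discs centered at the diagonal entries, with radii equal to the corresponding off-diagonal absolute row sums. A quick check on an arbitrary matrix (not one of the paper's graph adjacency matrices):

```python
import numpy as np

def gershgorin_discs(A):
    """Return (center, radius) pairs; by the GCT, every eigenvalue of A
    lies in the union of these discs."""
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
discs = gershgorin_discs(A)
eigvals = np.linalg.eigvals(A)
# each eigenvalue must fall inside at least one disc
for lam in eigvals:
    assert any(abs(lam - c) <= r + 1e-9 for c, r in discs)
```

For a d-regular graph's adjacency matrix the discs all have center 0 and radius d, recovering the classical bound |λ| ≤ d that underlies spectra-based rate arguments.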

41 pages, 1006 KiB  
Article
A Max-Flow Approach to Random Tensor Networks
by Khurshed Fitter, Faedi Loulidi and Ion Nechita
Entropy 2025, 27(7), 756; https://doi.org/10.3390/e27070756 - 15 Jul 2025
Viewed by 207
Abstract
The entanglement entropy of a random tensor network (RTN) is studied using tools from free probability theory. Random tensor networks are simple toy models that help in understanding the entanglement behavior of a boundary region in the anti-de Sitter/conformal field theory (AdS/CFT) context. These can be regarded as specific probabilistic models for tensors with particular geometry dictated by a graph (or network) structure. First, we introduce a model of RTN obtained by contracting maximally entangled states (corresponding to the edges of the graph) on the tensor product of Gaussian tensors (corresponding to the vertices of the graph). The entanglement spectrum of the resulting random state is analyzed along a given bipartition of the local Hilbert spaces. The limiting eigenvalue distribution of the reduced density operator of the RTN state is provided in the limit of large local dimension. This limiting value is described through a maximum flow optimization problem in a new graph corresponding to the geometry of the RTN and the given bipartition. In the case of series-parallel graphs, an explicit formula for the limiting eigenvalue distribution is provided using classical and free multiplicative convolutions. The physical implications of these results are discussed, allowing the analysis to move beyond the semiclassical regime without any cut assumption, specifically in terms of finite corrections to the average entanglement entropy of the RTN. Full article
(This article belongs to the Section Quantum Information)
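The maximum-flow optimization through which the limiting eigenvalue distribution is described is the classical network-flow problem. A generic Edmonds-Karp solver (a standard algorithm, not the paper's construction) can be sketched as:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: BFS augmenting paths on a dict-of-dicts capacity map."""
    # build residual capacities, adding reverse edges initialized to 0
    res = {u: dict(vs) for u, vs in capacity.items()}
    for u, vs in capacity.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:       # BFS for a shortest augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                    # no augmenting path left
        path, v = [], t                    # walk parents back to s
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)   # bottleneck capacity
        for u, v in path:                  # augment along the path
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug
```

On a small series-parallel network the returned value matches the min-cut capacity, mirroring the series-parallel case for which the paper gives an explicit formula.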

31 pages, 2957 KiB  
Article
Nash Equilibria in Four-Strategy Quantum Extensions of the Prisoner’s Dilemma Game
by Piotr Frąckiewicz, Anna Gorczyca-Goraj, Krzysztof Grzanka, Katarzyna Nowakowska and Marek Szopa
Entropy 2025, 27(7), 755; https://doi.org/10.3390/e27070755 - 15 Jul 2025
Viewed by 228
Abstract
The concept of Nash equilibria in pure strategies for quantum extensions of the general form of the Prisoner’s Dilemma game is investigated. The process of quantization involves incorporating two additional unitary strategies, which effectively expand the classical game. We consider five classes of [...] Read more.
The concept of Nash equilibria in pure strategies for quantum extensions of the general form of the Prisoner’s Dilemma game is investigated. The process of quantization involves incorporating two additional unitary strategies, which effectively expand the classical game. We consider five classes of such quantum games, which remain invariant under isomorphic transformations of the classical game. The resulting Nash equilibria are found to be more closely aligned with Pareto-optimal solutions than those of the conventional Nash equilibrium outcome of the classical game. Our results demonstrate the complexity and diversity of strategic behavior in the quantum setting, providing new insights into the dynamics of classical decision-making dilemmas. In particular, we provide a detailed characterization of strategy profiles and their corresponding Nash equilibria, thereby extending the understanding of quantum strategies’ impact on traditional game-theoretical problems. Full article
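The pure-strategy equilibrium search underlying such analyses can be illustrated on the classical game itself. A minimal sketch, using the textbook Prisoner's Dilemma payoffs (illustrative values, not taken from the paper): a profile (i, j) is a Nash equilibrium when i is a best response to column j for player A and j is a best response to row i for player B.

```python
from itertools import product

def pure_nash_equilibria(payoff_a, payoff_b):
    """All pure-strategy Nash equilibria (i, j) of a bimatrix game:
    i is a best response to j for player A, and j to i for player B."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    return [
        (i, j)
        for i, j in product(range(rows), range(cols))
        if all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
        and all(payoff_b[i][j] >= payoff_b[i][k] for k in range(cols))
    ]

# Classical Prisoner's Dilemma (0 = cooperate, 1 = defect).
pd_a = [[3, 0], [5, 1]]
pd_b = [[3, 5], [0, 1]]
equilibria = pure_nash_equilibria(pd_a, pd_b)  # mutual defection only
```

The quantum extensions studied in the paper enlarge each player's strategy set with two additional unitary strategies, which changes which profiles survive this best-response test and moves the surviving equilibria closer to the Pareto-optimal outcomes.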

21 pages, 7084 KiB  
Article
Chinese Paper-Cutting Style Transfer via Vision Transformer
by Chao Wu, Yao Ren, Yuying Zhou, Ming Lou and Qing Zhang
Entropy 2025, 27(7), 754; https://doi.org/10.3390/e27070754 - 15 Jul 2025
Viewed by 306
Abstract
Style transfer technology has seen substantial attention in image synthesis, notably in applications like oil painting, digital printing, and Chinese landscape painting. However, it is often difficult to generate transferred images that retain the essence of paper-cutting art and have strong visual appeal [...] Read more.
Style transfer technology has seen substantial attention in image synthesis, notably in applications like oil painting, digital printing, and Chinese landscape painting. However, it is often difficult to generate transferred images that retain the essence of paper-cutting art and have strong visual appeal when trying to apply the unique style of Chinese paper-cutting art to style transfer. Therefore, this paper proposes a new Transformer-based method for Chinese paper-cutting style transfer, aiming to realize the efficient transformation of images into the Chinese paper-cutting art style. Specifically, the network consists of a frequency-domain mixture block and a multi-level feature contrastive learning module. The frequency-domain mixture block explores spatial and frequency-domain interaction information, integrates multiple attention windows along with frequency-domain features, preserves critical details, and enhances the effectiveness of style conversion. To further embody the symmetrical structures and hollowed hierarchical patterns intrinsic to Chinese paper-cutting, the multi-level feature contrastive learning module is designed based on a contrastive learning strategy. This module maximizes mutual information between multi-level transferred features and content features, improves the consistency of representations across different layers, and thus accentuates the unique symmetrical aesthetics and artistic expression of paper-cutting. Extensive experimental results demonstrate that the proposed method outperforms existing state-of-the-art approaches in both qualitative and quantitative evaluations. Additionally, we created a Chinese paper-cutting dataset that, although modest in size, represents an important initial step towards enriching existing resources. This dataset provides valuable training data and a reference benchmark for future research in this field. Full article
(This article belongs to the Section Multidisciplinary Applications)

21 pages, 877 KiB  
Article
Identity-Based Provable Data Possession with Designated Verifier from Lattices for Cloud Computing
by Mengdi Zhao and Huiyan Chen
Entropy 2025, 27(7), 753; https://doi.org/10.3390/e27070753 - 15 Jul 2025
Viewed by 186
Abstract
Provable data possession (PDP) is a technique that enables the verification of data integrity in cloud storage without the need to download the data. PDP schemes are generally categorized into public and private verification. Public verification allows third parties to assess the integrity [...] Read more.
Provable data possession (PDP) is a technique that enables the verification of data integrity in cloud storage without the need to download the data. PDP schemes are generally categorized into public and private verification. Public verification allows third parties to assess the integrity of outsourced data, offering good openness and flexibility, but it may lead to privacy leakage and security risks. In contrast, private verification restricts the auditing capability to the data owner, providing better privacy protection but often resulting in higher verification costs and operational complexity due to limited local resources. Moreover, most existing PDP schemes are based on classical number-theoretic assumptions, making them vulnerable to quantum attacks. To address these challenges, this paper proposes an identity-based PDP with a designated verifier over lattices, utilizing a specially designed leveled identity-based fully homomorphic signature (IB-FHS) scheme. We provide a formal security proof of the proposed scheme under the small-integer solution (SIS) and learning with errors (LWE) assumptions in the random oracle model. Theoretical analysis confirms that the scheme achieves security guarantees while maintaining practical feasibility. Furthermore, simulation-based experiments show that for a 1 MB file and lattice dimension of n = 128, the computation times for core algorithms such as TagGen, GenProof, and CheckProof are approximately 20.76 s, 13.75 s, and 3.33 s, respectively. Compared to existing lattice-based PDP schemes, the proposed scheme introduces additional overhead due to the designated verifier mechanism; however, it achieves a well-balanced optimization among functionality, security, and efficiency. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

17 pages, 583 KiB  
Article
Cross-Domain Feature Enhancement-Based Password Guessing Method for Small Samples
by Cheng Liu, Junrong Li, Xiheng Liu, Bo Li, Mengsu Hou, Wei Yu, Yujun Li and Wenjun Liu
Entropy 2025, 27(7), 752; https://doi.org/10.3390/e27070752 - 15 Jul 2025
Viewed by 208
Abstract
As a crucial component of account protection system evaluation and intrusion detection, the advancement of password guessing technology encounters challenges due to its reliance on password data. In password guessing research, there is a conflict between the traditional models’ need for large training [...] Read more.
As a crucial component of account protection system evaluation and intrusion detection, the advancement of password guessing technology encounters challenges due to its reliance on password data. In password guessing research, there is a conflict between the traditional models’ need for large training samples and the limitations on accessing password data imposed by privacy protection regulations. Consequently, security researchers often struggle with the issue of having a very limited password set from which to guess. This paper introduces a small-sample password guessing technique that enhances cross-domain features. It analyzes the password set using probabilistic context-free grammar (PCFG) to create a list of password structure probabilities and a dictionary of password fragment probabilities, which are then used to generate a password set structure vector. The method calculates the cosine similarity between the small-sample password set B from the target area and publicly leaked password sets A_i using the structure vector, identifying the set A_max with the highest similarity. This set is then utilized as a training set, where the features of the small-sample password set are enhanced by modifying the structure vectors of the training set. The enhanced training set is subsequently employed for PCFG password generation. The paper uses hit rate as the evaluation metric, and Experiment I reveals that the similarity between B and A_i can be reliably measured when the size of B exceeds 150. Experiment II confirms the hypothesis that a higher similarity between A_i and B leads to a greater hit rate of A_i on the test set of B, with potential improvements of up to 32% compared to training with B alone. Experiment III demonstrates that after enhancing the features of A_max, the hit rate for the small-sample password set can increase by as much as 10.52% compared to previous results. This method offers a viable solution for small-sample password guessing without requiring prior knowledge. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
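The structure-vector comparison described in the abstract can be sketched in a few lines; the run-length L/D/S encoding below is a simplification of the paper's full PCFG machinery, and the structure keys are illustrative.

```python
import math
from collections import Counter

def structure(pw):
    # Map each character to its PCFG class, L(etter)/D(igit)/S(ymbol),
    # then run-length encode: "abc123!" -> "L3D3S1".
    out, prev, run = [], None, 0
    for ch in pw:
        cls = 'L' if ch.isalpha() else 'D' if ch.isdigit() else 'S'
        if cls == prev:
            run += 1
        else:
            if prev:
                out.append(f"{prev}{run}")
            prev, run = cls, 1
    if prev:
        out.append(f"{prev}{run}")
    return ''.join(out)

def structure_vector(pw_set, keys):
    # Relative frequency of each structure template over a fixed key order.
    counts = Counter(structure(p) for p in pw_set)
    total = sum(counts.values())
    return [counts[k] / total for k in keys]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

The leaked set A_max is then chosen as the argmax of cosine(v_B, v_{A_i}) over the candidate sets, and its structure vector is adjusted toward v_B before PCFG training.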

17 pages, 300 KiB  
Article
Commitment Schemes from OWFs with Applications to Quantum Oblivious Transfer
by Thomas Lorünser, Sebastian Ramacher and Federico Valbusa
Entropy 2025, 27(7), 751; https://doi.org/10.3390/e27070751 - 15 Jul 2025
Viewed by 196
Abstract
Commitment schemes (CSs) are essential to many cryptographic protocols and schemes with applications that include privacy-preserving computation on data, privacy-preserving authentication, and, in particular, oblivious transfer protocols. For quantum oblivious transfer (qOT) protocols, unconditionally binding commitment schemes that do not rely on hardness [...] Read more.
Commitment schemes (CSs) are essential to many cryptographic protocols and schemes with applications that include privacy-preserving computation on data, privacy-preserving authentication, and, in particular, oblivious transfer protocols. For quantum oblivious transfer (qOT) protocols, unconditionally binding commitment schemes that do not rely on hardness assumptions from structured mathematical problems are required. These additional constraints severely limit the choice of commitment schemes to random oracle-based constructions or Naor’s bit commitment scheme. As these protocols commit to individual bits, the use of such commitment schemes comes at a high bandwidth and computational cost. In this work, we investigate improvements to the efficiency of commitment schemes used in qOT protocols and propose an extension of Naor’s commitment scheme requiring the existence of one-way functions (OWFs) to reduce communication complexity for 2-bit strings. Additionally, we provide an interactive string commitment scheme with preprocessing to enable the fast and efficient computation of commitments. Full article
(This article belongs to the Special Issue Information-Theoretic Cryptography and Security)
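Naor's construction commits to a bit by stretching a short seed s with a pseudorandom generator G: the receiver sends a random string r, and the committer replies with G(s) for bit 0 or G(s) XOR r for bit 1. A minimal sketch, with SHA-256 in counter mode standing in for G (a stand-in for illustration, not a provably secure PRG):

```python
import hashlib
import secrets

def prg(seed: bytes, out_len: int) -> bytes:
    # SHA-256 in counter mode as a stand-in pseudorandom generator.
    out, ctr = b'', 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + ctr.to_bytes(4, 'big')).digest()
        ctr += 1
    return out[:out_len]

def commit(bit, r, n=16):
    """Commit to `bit` against the receiver's random string r (3n bytes):
    send G(s) for bit 0, or G(s) XOR r for bit 1. Binding holds because
    G(s) XOR G(s') = r is negligibly likely over the choice of r."""
    seed = secrets.token_bytes(n)
    g = prg(seed, len(r))
    c = g if bit == 0 else bytes(a ^ b for a, b in zip(g, r))
    return c, seed

def verify(bit, c, seed, r):
    g = prg(seed, len(r))
    expected = g if bit == 0 else bytes(a ^ b for a, b in zip(g, r))
    return c == expected
```

The paper's contribution extends this style of commitment to 2-bit strings, cutting the per-commitment bandwidth that bit-by-bit commitments impose on qOT protocols.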

129 pages, 6810 KiB  
Review
Statistical Mechanics of Linear k-mer Lattice Gases: From Theory to Applications
by Julian Jose Riccardo, Pedro Marcelo Pasinetti, Jose Luis Riccardo and Antonio Jose Ramirez-Pastor
Entropy 2025, 27(7), 750; https://doi.org/10.3390/e27070750 - 14 Jul 2025
Viewed by 220
Abstract
The statistical mechanics of structured particles with arbitrary size and shape adsorbed onto discrete lattices presents a longstanding theoretical challenge, mainly due to complex spatial correlations and entropic effects that emerge at finite densities. Even for simplified systems such as hard-core linear k [...] Read more.
The statistical mechanics of structured particles with arbitrary size and shape adsorbed onto discrete lattices presents a longstanding theoretical challenge, mainly due to complex spatial correlations and entropic effects that emerge at finite densities. Even for simplified systems such as hard-core linear k-mers, exact solutions remain limited to low-dimensional or highly constrained cases. In this review, we summarize the main theoretical approaches developed by our research group over the past three decades to describe adsorption phenomena involving linear k-mers—also known as multisite occupancy adsorption—on regular lattices. We examine modern approximations such as a two-dimensional extension of the exact thermodynamic functions obtained in one dimension, the Fractional Statistical Theory of Adsorption based on Haldane’s fractional statistics, and the so-called Occupation Balance based on an expansion of the reciprocal of the fugacity, as well as hybrid approaches such as the semi-empirical model obtained by combining exact one-dimensional calculations and the Guggenheim–DiMarzio approach. For interacting systems, statistical thermodynamics is explored within generalized Bragg–Williams and quasi-chemical frameworks. Particular focus is given to the recently proposed Multiple Exclusion statistics, which capture the correlated exclusion effects inherent to non-monomeric particles. Applications to monolayer and multilayer adsorption are analyzed, with relevance to hydrocarbon separation technologies. Finally, computational strategies, including advanced Monte Carlo techniques, are reviewed in the context of high-density regimes. This work provides a unified framework for understanding entropic and cooperative effects in lattice-adsorbed polyatomic systems and highlights promising directions for future theoretical and computational research. Full article
(This article belongs to the Special Issue Statistical Mechanics of Lattice Gases)
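As a concrete anchor for the exact one-dimensional results mentioned above: the number of ways to place n non-overlapping linear k-mers on an open chain of M sites is the binomial coefficient C(n + M − nk, n), from which the configurational entropy per site follows directly. A short sketch:

```python
from math import comb, log

def kmer_configurations(M, n, k):
    """Exact count of placements of n hard linear k-mers on an open
    1-D lattice of M sites: C(n + M - n*k, n). For k = 1 this reduces
    to the monomer result C(M, n)."""
    free = M - n * k          # sites left empty
    if free < 0:
        return 0
    return comb(n + free, n)

def entropy_per_site(M, n, k):
    # Configurational entropy s / k_B = ln(Omega) / M
    return log(kmer_configurations(M, n, k)) / M
```

For example, two dimers on five sites admit exactly three placements, and the monomer limit recovers the familiar ideal lattice-gas mixing entropy.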

17 pages, 1348 KiB  
Article
A Revised Bimodal Generalized Extreme Value Distribution: Theory and Climate Data Application
by Cira E. G. Otiniano, Mathews N. S. Lisboa and Terezinha K. A. Ribeiro
Entropy 2025, 27(7), 749; https://doi.org/10.3390/e27070749 - 14 Jul 2025
Viewed by 167
Abstract
The bimodal generalized extreme value (BGEV) distribution was first introduced in 2023. This distribution offers greater flexibility than the generalized extreme value (GEV) distribution for modeling extreme and heterogeneous (bimodal) events. However, applying this model requires a data-centering technique, as it lacks a [...] Read more.
The bimodal generalized extreme value (BGEV) distribution was first introduced in 2023. This distribution offers greater flexibility than the generalized extreme value (GEV) distribution for modeling extreme and heterogeneous (bimodal) events. However, applying this model requires a data-centering technique, as it lacks a location parameter. In this work, we investigate the properties of the BGEV distribution as redefined in 2024, which incorporates a location parameter, thereby enhancing its flexibility in practical applications. We derive explicit expressions for the probability density, the hazard rate, and the quantile function. Furthermore, we establish the identifiability property of this new class of BGEV distributions and compute expressions for the moments, the moment-generating function, and entropy. The applicability of the new model is illustrated using climate data. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
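For reference, the quantile function of the standard (unimodal) GEV distribution, which the BGEV construction generalizes; the explicit bimodal expressions derived in the paper are not reproduced here. A sketch, inverting F(x) = exp(−(1 + ξ(x − μ)/σ)^(−1/ξ)):

```python
import math

def gev_quantile(p, mu, sigma, xi):
    """Quantile of the standard GEV distribution.
    xi != 0:  Q(p) = mu + sigma * ((-ln p)**(-xi) - 1) / xi
    xi == 0:  Q(p) = mu - sigma * ln(-ln p)   (Gumbel limit)"""
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie in (0, 1)")
    if abs(xi) < 1e-12:
        return mu - sigma * math.log(-math.log(p))
    return mu + sigma * ((-math.log(p)) ** (-xi) - 1.0) / xi
```

A quick sanity check is the round trip F(Q(p)) = p on the distribution's support; the paper's location parameter plays the same role as μ here.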

35 pages, 4030 KiB  
Article
An Exergy-Enhanced Improved IGDT-Based Optimal Scheduling Model for Electricity–Hydrogen Urban Integrated Energy Systems
by Min Xie, Lei Qing, Jia-Nan Ye and Yan-Xuan Lu
Entropy 2025, 27(7), 748; https://doi.org/10.3390/e27070748 - 13 Jul 2025
Viewed by 212
Abstract
Urban integrated energy systems (UIESs) play a critical role in facilitating low-carbon and high-efficiency energy transitions. However, existing scheduling strategies predominantly focus on energy quantity and cost, often neglecting the heterogeneity of energy quality across electricity, heat, gas, and hydrogen. This paper presents [...] Read more.
Urban integrated energy systems (UIESs) play a critical role in facilitating low-carbon and high-efficiency energy transitions. However, existing scheduling strategies predominantly focus on energy quantity and cost, often neglecting the heterogeneity of energy quality across electricity, heat, gas, and hydrogen. This paper presents an exergy-enhanced stochastic optimization framework for the optimal scheduling of electricity–hydrogen urban integrated energy systems (EHUIESs) under multiple uncertainties. By incorporating exergy efficiency evaluation into a Stochastic Optimization–Improved Information Gap Decision Theory (SOI-IGDT) framework, the model dynamically balances economic cost with thermodynamic performance. A penalty-based iterative mechanism is introduced to track exergy deviations and guide the system toward higher energy quality. The proposed approach accounts for uncertainties in renewable output, load variation, and hydrogen-enriched compressed natural gas (HCNG) combustion. Case studies based on a 186-bus UIES coupled with a 20-node HCNG network show that the method improves exergy efficiency by up to 2.18% while maintaining cost robustness across varying confidence levels. These results underscore the significance of integrating exergy into real-time robust optimization for resilient and high-quality energy scheduling. Full article
(This article belongs to the Section Thermodynamics)
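The exergy accounting at the core of such evaluations reduces, for a heat stream, to weighting energy by the Carnot factor relative to the dead state. A minimal sketch; the dead-state temperature of 298.15 K is a conventional choice, not a value from the paper:

```python
def heat_exergy(Q, T, T0=298.15):
    """Exergy (maximum extractable work) of heat Q delivered at absolute
    temperature T, with dead-state temperature T0, in consistent units:
    Ex = Q * (1 - T0 / T), the Carnot factor applied to Q."""
    return Q * (1.0 - T0 / T)

def exergy_efficiency(exergy_out, exergy_in):
    # Second-law (exergy) efficiency of a conversion step.
    return exergy_out / exergy_in
```

Heat delivered at twice the dead-state temperature carries only half its energy as exergy, which is why energy-quantity scheduling alone can mask large quality losses.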

13 pages, 5099 KiB  
Article
Effect of Grain Size Distribution on Frictional Wear and Corrosion Properties of (FeCoNi)86Al7Ti7 High-Entropy Alloys
by Qinhu Sun, Pan Ma, Hong Yang, Kaiqiang Xie, Shiguang Wan, Chunqi Sheng, Zhibo Chen, Hongji Yang, Yandong Jia and Konda Gokuldoss Prashanth
Entropy 2025, 27(7), 747; https://doi.org/10.3390/e27070747 - 12 Jul 2025
Viewed by 219
Abstract
Optimization of grain size distribution in high-entropy alloys (HEAs) is a promising design strategy for improving wear and corrosion resistance. In this study, a (FeCoNi)86Al7Ti7 high-entropy alloy with customized isometric and heterogeneous structure, as well as fine-crystal isometric [...] Read more.
Optimization of grain size distribution in high-entropy alloys (HEAs) is a promising design strategy for improving wear and corrosion resistance. In this study, a (FeCoNi)86Al7Ti7 high-entropy alloy with customized isometric and heterogeneous structure, as well as fine-crystal isometric design by spark plasma sintering (SPS), is investigated for microstructure, surface morphology, hardness, frictional wear, and corrosion resistance. The effects of the SPS process on the microstructure and mechanical behavior are elucidated, and the frictional wear and corrosion resistance of the alloys are improved through heterogeneous-structure fine-grain strengthening and uniform fine-grain strengthening. The wear and corrosion mechanisms of (FeCoNi)86Al7Ti7 HEAs with different phase structure designs are elaborated. This work highlights the potential of powder metallurgy to efficiently and precisely control and optimize the multi-scale microstructure of high-entropy alloys, thereby improving their frictional wear and corrosion properties in demanding applications. Full article
(This article belongs to the Special Issue Recent Advances in High Entropy Alloys)

23 pages, 3614 KiB  
Article
A Multimodal Semantic-Enhanced Attention Network for Fake News Detection
by Weijie Chen, Yuzhuo Dang and Xin Zhang
Entropy 2025, 27(7), 746; https://doi.org/10.3390/e27070746 - 12 Jul 2025
Viewed by 514
Abstract
The proliferation of social media platforms has triggered an unprecedented increase in multimodal fake news, creating pressing challenges for content authenticity verification. Current fake news detection systems predominantly rely on isolated unimodal analysis (text or image), failing to exploit critical cross-modal correlations or [...] Read more.
The proliferation of social media platforms has triggered an unprecedented increase in multimodal fake news, creating pressing challenges for content authenticity verification. Current fake news detection systems predominantly rely on isolated unimodal analysis (text or image), failing to exploit critical cross-modal correlations or leverage latent social context cues. To bridge this gap, we introduce the SCCN (Semantic-enhanced Cross-modal Co-attention Network), a novel framework that synergistically combines multimodal features with refined social graph signals. Our approach innovatively combines text, image, and social relation features through a hierarchical fusion framework. First, we extract modality-specific features and enhance semantics by identifying entities in both text and visual data. Second, an improved co-attention mechanism selectively integrates social relations while removing irrelevant connections to reduce noise and explore latent informative links. Finally, the model is optimized via cross-entropy loss with entropy minimization. Experimental results on the benchmark PHEME and Weibo datasets show that SCCN consistently outperforms existing approaches, achieving relative accuracy improvements of 1.7% and 1.6% over the best-performing baseline on each dataset. Full article
(This article belongs to the Section Multidisciplinary Applications)
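The final training objective, cross-entropy with an entropy-minimization term, can be sketched in a few lines; the trade-off weight beta is a hypothetical placeholder, not a value from the paper.

```python
import numpy as np

def ce_with_entropy_min(logits, labels, beta=0.1):
    """Cross-entropy plus an entropy penalty that pushes predictions
    toward confident (low-entropy) distributions. `beta` is an
    illustrative trade-off weight."""
    z = logits - logits.max(axis=1, keepdims=True)      # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    ent = -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))
    return ce + beta * ent
```

Confident, correct predictions minimize both terms at once, while a uniform posterior is penalized twice: once by the cross-entropy and once by its maximal entropy.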

32 pages, 735 KiB  
Article
Dynamic Balance: A Thermodynamic Principle for the Emergence of the Golden Ratio in Open Non-Equilibrium Steady States
by Alejandro Ruiz
Entropy 2025, 27(7), 745; https://doi.org/10.3390/e27070745 - 11 Jul 2025
Viewed by 468
Abstract
We develop a symmetry-based variational theory that shows that the coarse-grained balance of work inflow to heat outflow in a driven, dissipative system relaxes to the golden ratio. Two order-2 Möbius transformations—a self-dual flip and a self-similar shift—generate a discrete non-abelian subgroup of [...] Read more.
We develop a symmetry-based variational theory that shows that the coarse-grained balance of work inflow to heat outflow in a driven, dissipative system relaxes to the golden ratio. Two order-2 Möbius transformations—a self-dual flip and a self-similar shift—generate a discrete non-abelian subgroup of PGL(2, Q(√5)). Requiring any smooth, strictly convex Lyapunov functional to be invariant under both maps enforces a single non-equilibrium fixed point: the golden mean. We confirm this result by (i) a gradient-flow partial-differential equation, (ii) a birth–death Markov chain whose continuum limit is Fokker–Planck, (iii) a Martin–Siggia–Rose field theory, and (iv) exact Ward identities that protect the fixed point against noise. Microscopic kinetics merely set the approach rate; three parameter-free invariants emerge: a 62%:38% split between entropy production and useful power, an RG-invariant diffusion coefficient D_α = ξ^z/τ linking relaxation time and correlation length, and a ϑ = 45° eigen-angle that maps to the golden logarithmic spiral. The same dual symmetry underlies scaling laws in rotating turbulence, plant phyllotaxis, cortical avalanches, quantum critical metals, and even de Sitter cosmology, providing a falsifiable, unifying principle for pattern formation far from equilibrium. Full article
(This article belongs to the Section Entropy and Biology)
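The fixed-point mechanism can be illustrated with the self-similar shift alone: iterating the Möbius map T(x) = 1 + 1/x from any positive start converges to the golden mean φ, whose reciprocals 1/φ ≈ 0.618 and 1/φ² ≈ 0.382 give the quoted 62%:38% split. A sketch; the paper's actual argument is variational, not iterative:

```python
def shift(x):
    # Self-similar Moebius shift T(x) = 1 + 1/x; its positive fixed
    # point solves x = 1 + 1/x, i.e. x = phi, the golden mean.
    return 1.0 + 1.0 / x

x = 2.0
for _ in range(60):        # contraction rate ~ 1/phi^2 per step
    x = shift(x)

phi = (1.0 + 5.0 ** 0.5) / 2.0
split = (1.0 / phi, 1.0 / phi ** 2)   # ~ (0.618, 0.382), summing to 1
```

The identity 1/φ + 1/φ² = 1 is what lets a single ratio partition a conserved flow into the two stated fractions.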

36 pages, 3682 KiB  
Article
Enhancing s-CO2 Brayton Power Cycle Efficiency in Cold Ambient Conditions Through Working Fluid Blends
by Paul Tafur-Escanta, Luis Coco-Enríquez, Robert Valencia-Chapi and Javier Muñoz-Antón
Entropy 2025, 27(7), 744; https://doi.org/10.3390/e27070744 - 11 Jul 2025
Viewed by 218
Abstract
Supercritical carbon dioxide (s-CO2) Brayton cycles have emerged as a promising technology for high-efficiency power generation, owing to their compact architecture and favorable thermophysical properties. However, their performance degrades significantly under cold-climate conditions—such as those encountered in Greenland, Russia, Canada, Scandinavia, [...] Read more.
Supercritical carbon dioxide (s-CO2) Brayton cycles have emerged as a promising technology for high-efficiency power generation, owing to their compact architecture and favorable thermophysical properties. However, their performance degrades significantly under cold-climate conditions—such as those encountered in Greenland, Russia, Canada, Scandinavia, and Alaska—due to the proximity to the fluid’s critical point. This study investigates the behavior of the recompression Brayton cycle (RBC) under subzero ambient temperatures through the incorporation of low-critical-temperature additives to create CO2-based binary mixtures. The working fluids examined include methane (CH4), tetrafluoromethane (CF4), nitrogen trifluoride (NF3), and krypton (Kr). Simulation results show that CH4- and CF4-rich mixtures can achieve thermal efficiency improvements of up to 10 percentage points over pure CO2. NF3-containing blends yield solid performance in moderately cold environments, while Kr-based mixtures provide modest but consistent efficiency gains. At low compressor inlet temperatures, the high-temperature recuperator (HTR) becomes the dominant performance-limiting component. Optimal distribution of recuperator conductance (UA) favors increased HTR sizing when mixtures are employed, ensuring effective heat recovery across larger temperature differentials. The study concludes with a comparative exergy analysis between pure CO2 and mixture-based cycles in RBC architecture. The findings highlight the potential of custom-tailored working fluids to enhance thermodynamic performance and operational stability of s-CO2 power systems under cold-climate conditions. Full article
(This article belongs to the Section Thermodynamics)

21 pages, 1362 KiB  
Article
Decentralized Consensus Protocols on SO(4)^N and TSO(4)^N with Reshaping
by Eric A. Butcher and Vianella Spaeth
Entropy 2025, 27(7), 743; https://doi.org/10.3390/e27070743 - 11 Jul 2025
Viewed by 309
Abstract
Consensus protocols for a multi-agent networked system consist of strategies that align the states of all agents that share information according to a given network topology, despite challenges such as communication limitations, time-varying networks, and communication delays. The special orthogonal group [...] Read more.
Consensus protocols for a multi-agent networked system consist of strategies that align the states of all agents that share information according to a given network topology, despite challenges such as communication limitations, time-varying networks, and communication delays. The special orthogonal group SO(n) plays a key role in applications from rigid body attitude synchronization to machine learning on Lie groups, particularly in fields like physics-informed learning and geometric deep learning. In this paper, N-agent consensus protocols are proposed on the Lie group SO(4) and the corresponding tangent bundle TSO(4), in which the state spaces are SO(4)^N and TSO(4)^N, respectively. In particular, when using communication topologies such as a ring graph for which the local stability of non-consensus equilibria is retained in the closed loop, a consensus protocol that leverages a reshaping strategy is proposed to destabilize non-consensus equilibria and produce consensus with almost global stability on SO(4)^N or TSO(4)^N. Lyapunov-based stability guarantees are obtained, and simulations are conducted to illustrate the advantages of these proposed consensus protocols. Full article
(This article belongs to the Special Issue Lie Group Machine Learning)
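A minimal sketch of the kind of intrinsic consensus update such protocols build on: each agent moves along the Lie-algebra average of its relative rotations log(Rᵢᵀ Rⱼ) to its ring-graph neighbors. The gain and the paper's reshaping strategy are not reproduced; the example stays near the identity, where the update linearizes to ordinary Laplacian consensus.

```python
import numpy as np
from scipy.linalg import expm, logm

def consensus_step(rotations, neighbors, gain=0.3):
    """One synchronous step of a plain intrinsic consensus update on
    SO(4)^N: each agent rotates along the averaged relative log-map."""
    new = []
    for i, Ri in enumerate(rotations):
        u = sum(np.real(logm(Ri.T @ rotations[j])) for j in neighbors[i])
        u = (u - u.T) / 2.0                  # project onto skew-symmetric so(4)
        new.append(Ri @ expm(gain * u / len(neighbors[i])))
    return new

def disagreement(rotations):
    return max(np.linalg.norm(a - b) for a in rotations for b in rotations)

# Four agents on a ring graph, initialized near the identity.
rng = np.random.default_rng(0)

def near_identity(eps=0.2):
    a = rng.standard_normal((4, 4))
    return expm(eps * (a - a.T) / 2.0)      # exp of skew-symmetric -> SO(4)

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
rotations = [near_identity() for _ in range(4)]
start = disagreement(rotations)
for _ in range(100):
    rotations = consensus_step(rotations, ring)
```

Far from the identity, ring topologies admit locally stable non-consensus equilibria for this plain update, which is precisely the failure mode the paper's reshaping strategy is designed to destabilize.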

26 pages, 4823 KiB  
Article
Robust Fractional Low Order Adaptive Linear Chirplet Transform and Its Application to Fault Analysis
by Junbo Long, Changshou Deng, Haibin Wang and Youxue Zhou
Entropy 2025, 27(7), 742; https://doi.org/10.3390/e27070742 - 11 Jul 2025
Viewed by 237
Abstract
Time-frequency analysis (TFA) technology is an important tool for analyzing non-Gaussian mechanical fault vibration signals. In the complex background of infinite variance process noise and Gaussian colored noise, it is difficult for traditional methods to obtain the highly concentrated time-frequency representation (TFR) of [...] Read more.
Time-frequency analysis (TFA) technology is an important tool for analyzing non-Gaussian mechanical fault vibration signals. In the complex background of infinite variance process noise and Gaussian colored noise, it is difficult for traditional methods to obtain the highly concentrated time-frequency representation (TFR) of fault vibration signals. Based on the insensitivity of fractional lower-order statistics to infinite-variance and Gaussian processes, robust fractional lower order adaptive linear chirplet transform (FLOACT) and fractional lower order adaptive scaling chirplet transform (FLOASCT) methods are proposed in this paper to suppress this mixed noise. The calculation steps and processes of the algorithms are deduced and summarized in detail. The experimental simulation results show that the improved FLOACT and FLOASCT methods perform well on multi-component signals with closely spaced frequencies, and even crossing time-frequency trajectories, in strongly impulsive background noise. Finally, the proposed methods are applied to the feature analysis and extraction of mechanical outer-race fault vibration signals in complex background environments, and the results show good estimation accuracy and effectiveness at low MSNR, indicating robustness and adaptability. Full article
(This article belongs to the Section Signal and Data Analysis)
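The fractional lower-order idea can be shown in isolation: replacing samples x by |x|^a sign(x) with a small exponent a tames the infinite-variance (alpha-stable) samples that break second-order statistics, which is the property the FLOACT/FLOASCT transforms exploit. A sketch; the exponent a = 0.5 is an illustrative choice:

```python
import numpy as np

def flo(x, a=0.5):
    """Fractional lower-order transform |x|^a * sign(x); for small
    enough a its low-order moments stay finite under alpha-stable
    (infinite-variance) noise, unlike the ordinary second moment."""
    return np.sign(x) * np.abs(x) ** a

def flo_correlation(x, y, a=0.5):
    # Normalized correlation computed on FLO-transformed samples,
    # robust to impulsive heavy-tailed noise.
    fx, fy = flo(x, a), flo(y, a)
    return float(np.mean(fx * fy) / np.sqrt(np.mean(fx**2) * np.mean(fy**2)))
```

On Cauchy-distributed (alpha = 1) samples the empirical variance is dominated by a few impulses, while the transformed samples remain well behaved, which is why FLO-based estimators keep working where correlation-based TFA breaks down.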
