Entropy, Volume 27, Issue 7 (July 2025) – 111 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official version of record. To view a paper in PDF form, click on the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 460 KiB  
Article
Post-Quantum Secure Multi-Factor Authentication Protocol for Multi-Server Architecture
by Yunhua Wen, Yandong Su and Wei Li
Entropy 2025, 27(7), 765; https://doi.org/10.3390/e27070765 - 18 Jul 2025
Abstract
The multi-factor authentication (MFA) protocol requires users to provide a combination of a password, a smart card, and biometric data as verification factors to gain access to the services they need. In a single-server MFA system, users accessing multiple distinct servers must register separately for each server, manage multiple smart cards, and remember numerous passwords. In contrast, an MFA system designed for a multi-server architecture allows users to register once at a registration center (RC) and then access all associated servers with a single smart card and one password. MFA with an offline RC addresses the computational bottleneck and single-point-of-failure issues associated with the RC. In this paper, we propose a post-quantum secure MFA protocol for a multi-server architecture with an offline RC. Our MFA protocol utilizes the post-quantum secure Kyber key encapsulation mechanism and an information-theoretically secure fuzzy extractor as its building blocks. We formally prove the post-quantum semantic security of our MFA protocol under the real-or-random (ROR) model in the random oracle paradigm. Compared to related protocols, our protocol achieves higher efficiency and maintains reasonable communication overhead. Full article
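As a rough illustration of the fuzzy-extractor building block, the following is a minimal sketch of the classic code-offset construction with a toy repetition code; it is not the paper's construction (which pairs an information-theoretically secure fuzzy extractor with the Kyber KEM), and all parameters and names here are illustrative assumptions.

```python
# Toy code-offset fuzzy extractor (illustrative sketch, not the paper's scheme).
# gen() derives a key k and public helper data from a biometric reading w;
# rep() re-derives k from a noisy reading using the helper data.
import secrets

R = 5  # repetition factor: majority vote corrects up to 2 bit flips per block

def _bits(data: bytes):
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def gen(w: bytes, key_bits: int = 16):
    k = [secrets.randbits(1) for _ in range(key_bits)]
    code = [b for b in k for _ in range(R)]             # repetition encoding
    helper = [c ^ x for c, x in zip(code, _bits(w))]    # helper = code XOR w
    return k, helper

def rep(w_noisy: bytes, helper):
    code = [h ^ x for h, x in zip(helper, _bits(w_noisy))]
    return [int(sum(code[i * R:(i + 1) * R]) > R // 2)  # majority decoding
            for i in range(len(code) // R)]

w = secrets.token_bytes(10)                    # 80-bit biometric reading
key, helper = gen(w)
w_noisy = bytes([w[0] ^ 0b00000101]) + w[1:]   # two bit flips in the reading
assert rep(w_noisy, helper) == key             # key recovered despite noise
```

In a full MFA handshake, the recovered key would be combined with the password and smart-card secret, and session keys would be established via the KEM; those steps are omitted here.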
20 pages, 7358 KiB  
Article
Comparative Analysis of Robust Entanglement Generation in Engineered XX Spin Chains
by Eduardo K. Soares, Gentil D. de Moraes Neto and Fabiano M. Andrade
Entropy 2025, 27(7), 764; https://doi.org/10.3390/e27070764 - 18 Jul 2025
Abstract
We present a numerical investigation comparing two entanglement generation protocols in finite XX spin chains with varying spin magnitudes (s = 1/2, 1, 3/2). Protocol 1 (P1) relies on staggered couplings to steer correlations toward the ends of the chain, while Protocol 2 (P2) adopts a dual-port architecture that uses optimized boundary fields to mediate virtual excitations between terminal spins. Our results show that P2 consistently outperforms P1 for all spin values, generating higher-fidelity entanglement on shorter timescales when evaluated under the same system parameters. Furthermore, P2 exhibits superior robustness under realistic imperfections, including diagonal and off-diagonal disorder, as well as dephasing noise. To further assess the resilience of both protocols in experimentally relevant settings, we employ the pseudomode formalism to characterize the impact of non-Markovian noise on the entanglement dynamics. Our analysis reveals that the dual-port mechanism (P2) remains effective even when memory effects are present, as it reduces the excitation of bulk modes that would otherwise enhance environment-induced backflow. Together, the scalability, efficiency, and noise resilience of the dual-port approach position it as a promising framework for entanglement distribution in solid-state quantum information platforms. Full article
(This article belongs to the Special Issue Entanglement in Quantum Spin Systems)
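For reference, a finite XX chain of the kind compared here takes the standard form below (conventions assumed; the paper's exact parameterization of the staggered couplings and boundary fields may differ):

```latex
H = \sum_{i=1}^{N-1} J_i \left( S_i^x S_{i+1}^x + S_i^y S_{i+1}^y \right)
  + \sum_{i=1}^{N} h_i S_i^z ,
```

where a P1-type protocol modulates the couplings J_i along the chain, while a dual-port P2-type protocol keeps a uniform bulk and tunes only the boundary fields h_1 and h_N.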
20 pages, 3787 KiB  
Article
Enhancing Robustness of Variational Data Assimilation in Chaotic Systems: An α-4DVar Framework with Rényi Entropy and α-Generalized Gaussian Distributions
by Yuchen Luo, Xiaoqun Cao, Kecheng Peng, Mengge Zhou and Yanan Guo
Entropy 2025, 27(7), 763; https://doi.org/10.3390/e27070763 - 18 Jul 2025
Abstract
Traditional four-dimensional variational data assimilation (4DVar) methods are limited by the assumption that observation errors follow a Gaussian distribution, and the gradient of the objective functional is vulnerable to observation noise and outliers. To address these issues, this paper proposes a non-Gaussian nonlinear data assimilation method called α-4DVar, based on Rényi entropy and the α-generalized Gaussian distribution. By incorporating the heavy-tailed property of Rényi entropy, the objective function and its gradient suitable for non-Gaussian errors are derived, and numerical experiments are conducted using the Lorenz-63 model. Experiments are conducted with Gaussian and non-Gaussian errors, as well as different initial guesses, to compare the assimilation performance of traditional 4DVar and α-4DVar. The results show that α-4DVar performs as well as the traditional method in the absence of observational errors. Its analysis field is closer to the truth, with the RMSE rapidly dropping to a low level and remaining stable, particularly under non-Gaussian errors. Under different initial guesses, the RMSE of both the background and analysis fields decreases quickly and stabilizes. In conclusion, the α-4DVar method demonstrates significant advantages in handling non-Gaussian observational errors, robustness against noise, and adaptability to various observational conditions, thus offering a more reliable and effective solution for data assimilation. Full article
(This article belongs to the Section Complexity)
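The Lorenz-63 testbed used in these experiments is straightforward to reproduce; below is a minimal sketch of the forward model with a fourth-order Runge-Kutta integrator and the standard chaotic parameters (σ = 10, ρ = 28, β = 8/3). The assimilation machinery itself (cost function, gradient, minimization) is omitted.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 ODEs."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt=0.01):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Forward model: propagate an initial condition over an assimilation window.
state = np.array([1.0, 1.0, 1.0])
trajectory = [state]
for _ in range(1000):
    trajectory.append(rk4_step(lorenz63, trajectory[-1]))
```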
18 pages, 821 KiB  
Article
Joint Iterative Decoding Design of Cooperative Downlink SCMA Systems
by Hao Cheng, Min Zhang and Ruoyu Su
Entropy 2025, 27(7), 762; https://doi.org/10.3390/e27070762 - 18 Jul 2025
Abstract
Sparse code multiple access (SCMA) has been a competitive multiple access candidate for future communication networks due to its superior spectral efficiency and ability to provide massive connectivity. However, cell-edge users may suffer severe performance degradation due to signal attenuation. Therefore, a cooperative downlink SCMA system is proposed to improve transmission reliability. To the best of our knowledge, multiuser detection is still an open issue for this cooperative downlink SCMA system. To this end, we propose a joint iterative decoding design for the cooperative downlink SCMA system that uses the joint factor graph stemming from the direct and relay transmissions. The closed-form bit-error rate (BER) performance of the cooperative downlink SCMA system is also derived. Simulation results verify that the proposed cooperative downlink SCMA system performs better than the non-cooperative one. Full article
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)
14 pages, 3176 KiB  
Article
Impact of Data Distribution and Bootstrap Setting on Anomaly Detection Using Isolation Forest in Process Quality Control
by Hyunyul Choi and Kihyo Jung
Entropy 2025, 27(7), 761; https://doi.org/10.3390/e27070761 - 18 Jul 2025
Abstract
This study investigates the impact of data distribution and bootstrap resampling on the anomaly detection performance of the Isolation Forest (iForest) algorithm in statistical process control. Although iForest has received attention for its multivariate and ensemble-based nature, its performance under non-normal data distributions and varying bootstrap settings remains underexplored. To address this gap, a comprehensive simulation was performed across 18 scenarios involving log-normal, gamma, and t-distributions with different mean shift levels and bootstrap configurations. The results show that iForest substantially outperforms the conventional Hotelling’s T² control chart, especially in non-Gaussian settings and under small-to-medium process shifts. Enabling bootstrap resampling led to marginal improvements across classification metrics, including accuracy, precision, recall, F1-score, and average run length (ARL1). However, a key limitation of iForest was its reduced sensitivity to subtle process changes, such as a 1σ mean shift, highlighting an area for future enhancement. Full article
(This article belongs to the Section Multidisciplinary Applications)
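A minimal sketch of this kind of experiment, assuming scikit-learn's IsolationForest with its bootstrap option (the paper's 18 scenarios, metrics, and Hotelling's T² baseline are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# In-control data from a skewed, non-Gaussian (log-normal) process.
in_control = rng.lognormal(mean=0.0, sigma=1.0, size=(2000, 3))
# Out-of-control data: the same process with a mean shift applied.
shifted = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 3)) + 1.0

clf = IsolationForest(n_estimators=200, bootstrap=True, random_state=0)
clf.fit(in_control)

scores = clf.decision_function(shifted)  # lower score = more anomalous
labels = clf.predict(shifted)            # -1 flags an anomaly, +1 normal
print((labels == -1).mean())             # detection rate on the shifted data
```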
18 pages, 1438 KiB  
Article
Maximum Entropy Estimates of Hubble Constant from Planck Measurements
by David P. Knobles and Mark F. Westling
Entropy 2025, 27(7), 760; https://doi.org/10.3390/e27070760 - 16 Jul 2025
Viewed by 77
Abstract
A maximum entropy (ME) methodology was used to infer the Hubble constant from the temperature anisotropies in cosmic microwave background (CMB) measurements, as measured by the Planck satellite. A simple cosmological model provided physical insight and afforded robust statistical sampling of a parameter space. The parameter space included the spectral tilt and amplitude of adiabatic density fluctuations of the early universe and the present-day ratios of dark energy, matter, and baryonic matter density. A statistical temperature was estimated by applying the equipartition theorem, which uniquely specifies a posterior probability distribution. The ME analysis inferred the mean value of the Hubble constant to be about 67 km/s/Mpc with a conservative standard deviation of approximately 4.4 km/s/Mpc. Unlike standard Bayesian analyses that incorporate specific noise models, the ME approach treats the model error generically, thereby producing broader, but less assumption-dependent, uncertainty bounds. The inferred ME value lies within 1σ of both early-universe estimates (Planck, Dark Energy Spectroscopic Instrument (DESI)) and late-universe measurements (e.g., the Chicago Carnegie Hubble Program (CCHP)) using redshift data collected from the James Webb Space Telescope (JWST). Thus, the ME analysis does not appear to support the existence of the Hubble tension. Full article
(This article belongs to the Special Issue Insight into Entropy)
20 pages, 2382 KiB  
Article
Heterogeneity-Aware Personalized Federated Neural Architecture Search
by An Yang and Ying Liu
Entropy 2025, 27(7), 759; https://doi.org/10.3390/e27070759 - 16 Jul 2025
Viewed by 107
Abstract
Federated learning (FL), which enables collaborative learning across distributed nodes, confronts a significant heterogeneity challenge, primarily comprising resource heterogeneity induced by different hardware platforms and statistical heterogeneity originating from non-IID private data distributions among clients. Neural architecture search (NAS), particularly one-shot NAS, holds great promise for automatically designing optimal personalized models tailored to such heterogeneous scenarios. However, the coexistence of both resource and statistical heterogeneity destabilizes the training of the one-shot supernet, impairs the evaluation of candidate architectures, and ultimately hinders the discovery of optimal personalized models. To address this problem, we propose a heterogeneity-aware personalized federated NAS (HAPFNAS) method. First, we leverage lightweight knowledge models to distill knowledge from clients to the server-side supernet, thereby effectively mitigating the effects of heterogeneity and enhancing the training stability. Then, we build random-forest-based personalized performance predictors to enable the efficient evaluation of candidate architectures across clients. Furthermore, we develop a model-heterogeneous FL algorithm called heteroFedAvg to facilitate collaborative model training for the discovered personalized models. Comprehensive experiments on CIFAR-10/100 and Tiny-ImageNet classification datasets demonstrate the effectiveness of our HAPFNAS, compared to state-of-the-art federated NAS methods. Full article
(This article belongs to the Section Signal and Data Analysis)
16 pages, 862 KiB  
Article
Random Search Walks Inside Absorbing Annuli
by Anderson S. Bibiano-Filho, Jandson F. O. de Freitas, Marcos G. E. da Luz, Gandhimohan M. Viswanathan and Ernesto P. Raposo
Entropy 2025, 27(7), 758; https://doi.org/10.3390/e27070758 - 15 Jul 2025
Viewed by 84
Abstract
We revisit the problem of random search walks in the two-dimensional (2D) space between concentric absorbing annuli, in which a searcher performs random steps until finding either the inner or the outer ring. By considering step lengths drawn from a power-law distribution, we obtain the exact analytical result for the search efficiency η in the ballistic limit, as well as an approximate expression for η in the regime of searches starting far away from both rings, and the scaling behavior of η for very small initial distances to the inner ring. Our numerical results show good overall agreement with the theoretical findings. We also analyze numerically the absorbing probabilities related to the encounter of the inner and outer rings and the associated Shannon entropy. The power-law exponent marking the crossing of such probabilities (equiprobability) and the maximum entropy condition grows logarithmically with the starting distance. Random search walks inside absorbing annuli are relevant, since they represent a mean-field approach to conventional random searches in 2D, which is still an open problem with important applications in various fields. Full article
(This article belongs to the Special Issue Transport in Complex Environments)
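For concreteness, a minimal sketch under assumed conventions: step lengths drawn by inverse-CDF sampling from P(ℓ) ∝ ℓ^(-μ) for ℓ ≥ ℓ₀, isotropic step directions, and absorption checked at step endpoints rather than along the flight path.

```python
import numpy as np

def power_law_steps(n, mu, ell0, rng):
    """Step lengths with density P(ell) ~ ell**(-mu), ell >= ell0, mu > 1."""
    return ell0 * rng.random(n) ** (-1.0 / (mu - 1.0))

def search_between_annuli(r_in=1.0, r_out=50.0, r0=10.0, mu=2.0, seed=1):
    """Walk until the searcher lands inside the inner or beyond the outer ring."""
    rng = np.random.default_rng(seed)
    pos = np.array([r0, 0.0])
    while True:
        theta = rng.uniform(0.0, 2.0 * np.pi)
        ell = power_law_steps(1, mu, 1.0, rng)[0]
        pos = pos + ell * np.array([np.cos(theta), np.sin(theta)])
        r = np.hypot(pos[0], pos[1])
        if r <= r_in:
            return "inner"
        if r >= r_out:
            return "outer"

hits = [search_between_annuli(seed=s) for s in range(200)]
p_inner = hits.count("inner") / len(hits)  # estimated absorbing probability
```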
33 pages, 1024 KiB  
Article
Graph-Theoretic Limits of Distributed Computation: Entropy, Eigenvalues, and Chromatic Numbers
by Mohammad Reza Deylam Salehi and Derya Malak
Entropy 2025, 27(7), 757; https://doi.org/10.3390/e27070757 - 15 Jul 2025
Viewed by 116
Abstract
We address the problem of the distributed computation of arbitrary functions of two correlated sources, X_1 and X_2, residing in two distributed source nodes, respectively. We exploit the structure of a computation task by coding source characteristic graphs (and handle multiple instances using the n-fold OR product of this graph with itself). For regular graphs and general graphs, we establish bounds on the optimal rate—characterized by the chromatic entropy for the n-fold graph products—that allows a receiver to compute arbitrary functions over finite fields in an asymptotically lossless manner. For the special class of cycle graphs (i.e., 2-regular graphs), we establish an exact characterization of chromatic numbers and derive bounds on the required rates. Next, focusing on the more general class of d-regular graphs, we establish connections between d-regular graphs and expansion rates for n-fold graph products using graph spectra. Finally, for general graphs, we leverage the Gershgorin Circle Theorem (GCT) to provide a characterization of the spectra, which allows us to derive new bounds on the optimal rate. Our codes leverage the spectra of the computation and provide a graph expansion-based characterization to succinctly capture the computation structure, providing new insights into the problem of distributed computation of arbitrary functions. Full article
(This article belongs to the Special Issue Information Theory and Data Compression)
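For orientation, the rate object at the center of these bounds can be written as follows, using standard definitions from the characteristic-graph compression literature (the paper's notation and product conventions may differ in detail):

```latex
H_{\chi}(G, X) \;=\; \min\left\{ H\big(c(X)\big) \;:\; c \text{ is a valid coloring of } G \right\},
\qquad
R \;=\; \lim_{n \to \infty} \frac{1}{n}\, H_{\chi}\big(G^{\vee n}, X^n\big),
```

where G^{∨n} denotes the n-fold OR product of the characteristic graph G with itself.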
41 pages, 1006 KiB  
Article
A Max-Flow Approach to Random Tensor Networks
by Khurshed Fitter, Faedi Loulidi and Ion Nechita
Entropy 2025, 27(7), 756; https://doi.org/10.3390/e27070756 - 15 Jul 2025
Viewed by 60
Abstract
The entanglement entropy of a random tensor network (RTN) is studied using tools from free probability theory. Random tensor networks are simple toy models that help in understanding the entanglement behavior of a boundary region in the anti-de Sitter/conformal field theory (AdS/CFT) context. These can be regarded as specific probabilistic models for tensors with particular geometry dictated by a graph (or network) structure. First, we introduce a model of RTN obtained by contracting maximally entangled states (corresponding to the edges of the graph) on the tensor product of Gaussian tensors (corresponding to the vertices of the graph). The entanglement spectrum of the resulting random state is analyzed along a given bipartition of the local Hilbert spaces. The limiting eigenvalue distribution of the reduced density operator of the RTN state is provided in the limit of large local dimension. This limiting value is described through a maximum flow optimization problem in a new graph corresponding to the geometry of the RTN and the given bipartition. In the case of series-parallel graphs, an explicit formula for the limiting eigenvalue distribution is provided using classical and free multiplicative convolutions. The physical implications of these results are discussed, allowing the analysis to move beyond the semiclassical regime without any cut assumption, specifically in terms of finite corrections to the average entanglement entropy of the RTN. Full article
(This article belongs to the Section Quantum Information)
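A toy numerical illustration of the max-flow object involved, using networkx; the graph, its capacities, and their reading as (log) bond dimensions are assumptions for illustration, whereas the paper's optimization acts on a transformed graph determined by the RTN geometry and the bipartition.

```python
import networkx as nx

# Small two-terminal network; edge capacities stand in for (log) bond
# dimensions, and the min cut bounds the leading entanglement entropy.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=2.0)
G.add_edge("s", "b", capacity=1.0)
G.add_edge("a", "b", capacity=0.5)
G.add_edge("a", "t", capacity=1.5)
G.add_edge("b", "t", capacity=2.0)

flow_value, _ = nx.maximum_flow(G, "s", "t")
cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
print(flow_value, cut_value)  # both 3.0, by max-flow/min-cut duality
```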
31 pages, 2957 KiB  
Article
Nash Equilibria in Four-Strategy Quantum Extensions of the Prisoner’s Dilemma Game
by Piotr Frąckiewicz, Anna Gorczyca-Goraj, Krzysztof Grzanka, Katarzyna Nowakowska and Marek Szopa
Entropy 2025, 27(7), 755; https://doi.org/10.3390/e27070755 - 15 Jul 2025
Viewed by 92
Abstract
The concept of Nash equilibria in pure strategies for quantum extensions of the general form of the Prisoner’s Dilemma game is investigated. The process of quantization involves incorporating two additional unitary strategies, which effectively expand the classical game. We consider five classes of such quantum games, which remain invariant under isomorphic transformations of the classical game. The resulting Nash equilibria are found to be more closely aligned with Pareto-optimal solutions than those of the conventional Nash equilibrium outcome of the classical game. Our results demonstrate the complexity and diversity of strategic behavior in the quantum setting, providing new insights into the dynamics of classical decision-making dilemmas. In particular, we provide a detailed characterization of strategy profiles and their corresponding Nash equilibria, thereby extending the understanding of quantum strategies’ impact on traditional game-theoretical problems. Full article
21 pages, 7084 KiB  
Article
Chinese Paper-Cutting Style Transfer via Vision Transformer
by Chao Wu, Yao Ren, Yuying Zhou, Ming Lou and Qing Zhang
Entropy 2025, 27(7), 754; https://doi.org/10.3390/e27070754 - 15 Jul 2025
Viewed by 142
Abstract
Style transfer technology has seen substantial attention in image synthesis, notably in applications like oil painting, digital printing, and Chinese landscape painting. However, when applying the unique style of Chinese paper-cutting art to style transfer, it is often difficult to generate transferred images that retain the essence of paper-cutting art and have strong visual appeal. Therefore, this paper proposes a new Transformer-based method for Chinese paper-cutting style transfer, aiming to realize the efficient transformation of images into the Chinese paper-cutting art style. Specifically, the network consists of a frequency-domain mixture block and a multi-level feature contrastive learning module. The frequency-domain mixture block explores spatial and frequency-domain interaction information, integrates multiple attention windows along with frequency-domain features, preserves critical details, and enhances the effectiveness of style conversion. To further embody the symmetrical structures and hollowed hierarchical patterns intrinsic to Chinese paper-cutting, the multi-level feature contrastive learning module is designed based on a contrastive learning strategy. This module maximizes mutual information between multi-level transferred features and content features, improves the consistency of representations across different layers, and thus accentuates the unique symmetrical aesthetics and artistic expression of paper-cutting. Extensive experimental results demonstrate that the proposed method outperforms existing state-of-the-art approaches in both qualitative and quantitative evaluations. Additionally, we created a Chinese paper-cutting dataset that, although modest in size, represents an important initial step towards enriching existing resources. This dataset provides valuable training data and a reference benchmark for future research in this field. Full article
(This article belongs to the Section Multidisciplinary Applications)
21 pages, 877 KiB  
Article
Identity-Based Provable Data Possession with Designated Verifier from Lattices for Cloud Computing
by Mengdi Zhao and Huiyan Chen
Entropy 2025, 27(7), 753; https://doi.org/10.3390/e27070753 - 15 Jul 2025
Viewed by 79
Abstract
Provable data possession (PDP) is a technique that enables the verification of data integrity in cloud storage without the need to download the data. PDP schemes are generally categorized into public and private verification. Public verification allows third parties to assess the integrity of outsourced data, offering good openness and flexibility, but it may lead to privacy leakage and security risks. In contrast, private verification restricts the auditing capability to the data owner, providing better privacy protection but often resulting in higher verification costs and operational complexity due to limited local resources. Moreover, most existing PDP schemes are based on classical number-theoretic assumptions, making them vulnerable to quantum attacks. To address these challenges, this paper proposes an identity-based PDP with a designated verifier over lattices, utilizing a specially leveled identity-based fully homomorphic signature (IB-FHS) scheme. We provide a formal security proof of the proposed scheme under the small-integer-solution (SIS) and learning-with-errors (LWE) assumptions in the random oracle model. Theoretical analysis confirms that the scheme achieves security guarantees while maintaining practical feasibility. Furthermore, simulation-based experiments show that for a 1 MB file and a lattice dimension of n = 128, the computation times for the core algorithms TagGen, GenProof, and CheckProof are approximately 20.76 s, 13.75 s, and 3.33 s, respectively. Compared to existing lattice-based PDP schemes, the proposed scheme introduces additional overhead due to the designated-verifier mechanism; however, it achieves a well-balanced optimization among functionality, security, and efficiency. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
17 pages, 583 KiB  
Article
Cross-Domain Feature Enhancement-Based Password Guessing Method for Small Samples
by Cheng Liu, Junrong Li, Xiheng Liu, Bo Li, Mengsu Hou, Wei Yu, Yujun Li and Wenjun Liu
Entropy 2025, 27(7), 752; https://doi.org/10.3390/e27070752 - 15 Jul 2025
Viewed by 89
Abstract
As a crucial component of account protection system evaluation and intrusion detection, the advancement of password guessing technology encounters challenges due to its reliance on password data. In password guessing research, there is a conflict between the traditional models’ need for large training samples and the limitations on accessing password data imposed by privacy protection regulations. Consequently, security researchers often struggle with the issue of having a very limited password set from which to guess. This paper introduces a small-sample password guessing technique that enhances cross-domain features. It analyzes the password set using probabilistic context-free grammar (PCFG) to create a list of password structure probabilities and a dictionary of password fragment probabilities, which are then used to generate a password set structure vector. The method calculates the cosine similarity between the small-sample password set B from the target area and publicly leaked password sets Ai using the structure vector, identifying the set Amax with the highest similarity. This set is then utilized as a training set, where the features of the small-sample password set are enhanced by modifying the structure vectors of the training set. The enhanced training set is subsequently employed for PCFG password generation. The paper uses hit rate as the evaluation metric, and Experiment I reveals that the similarity between B and Ai can be reliably measured when the size of B exceeds 150. Experiment II confirms the hypothesis that a higher similarity between Ai and B leads to a greater hit rate of Ai on the test set of B, with potential improvements of up to 32% compared to training with B alone. Experiment III demonstrates that after enhancing the features of Amax, the hit rate for the small-sample password set can increase by as much as 10.52% compared to previous results. This method offers a viable solution for small-sample password guessing without requiring prior knowledge. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
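A condensed sketch of the structure-vector and similarity steps; the letter/digit/symbol segmentation below is the common PCFG convention, and the feature-enhancement step of the paper is not reproduced.

```python
import math
from collections import Counter

def structure(pw: str) -> str:
    """PCFG-style structure string, e.g. 'passw0rd!' -> 'L5D1L2S1'."""
    out, prev, run = [], None, 0
    for ch in pw:
        kind = "L" if ch.isalpha() else "D" if ch.isdigit() else "S"
        if kind != prev and prev is not None:
            out.append(f"{prev}{run}")
            run = 0
        prev, run = kind, run + 1
    if prev is not None:
        out.append(f"{prev}{run}")
    return "".join(out)

def structure_vector(passwords):
    counts = Counter(structure(p) for p in passwords)
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    norm = lambda d: math.sqrt(sum(x * x for x in d.values()))
    return dot / (norm(u) * norm(v))

# Choose the leaked set A_i whose structure vector is closest to sample B.
B = ["passw0rd!", "qwerty12", "abc12345"]
leaks = {"A1": ["letmein1", "dragon99"], "A2": ["passw0rd!", "summer08!"]}
vB = structure_vector(B)
best = max(leaks, key=lambda a: cosine(vB, structure_vector(leaks[a])))
```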
17 pages, 300 KiB  
Article
Commitment Schemes from OWFs with Applications to Quantum Oblivious Transfer
by Thomas Lorünser, Sebastian Ramacher and Federico Valbusa
Entropy 2025, 27(7), 751; https://doi.org/10.3390/e27070751 - 15 Jul 2025
Viewed by 117
Abstract
Commitment schemes (CSs) are essential to many cryptographic protocols and schemes with applications that include privacy-preserving computation on data, privacy-preserving authentication, and, in particular, oblivious transfer protocols. For quantum oblivious transfer (qOT) protocols, unconditionally binding commitment schemes that do not rely on hardness assumptions from structured mathematical problems are required. These additional constraints severely limit the choice of commitment schemes to random oracle-based constructions or Naor’s bit commitment scheme. As these protocols commit to individual bits, the use of such commitment schemes comes at a high bandwidth and computational cost. In this work, we investigate improvements to the efficiency of commitment schemes used in qOT protocols and propose an extension of Naor’s commitment scheme requiring the existence of one-way functions (OWFs) to reduce communication complexity for 2-bit strings. Additionally, we provide an interactive string commitment scheme with preprocessing to enable the fast and efficient computation of commitments. Full article
(This article belongs to the Special Issue Information-Theoretic Cryptography and Security)
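For readers unfamiliar with the baseline being extended, below is a compact sketch of Naor's bit commitment; modeling the PRG with SHAKE-128 and the parameter sizes are assumptions for illustration, and the paper's 2-bit-string extension and preprocessing scheme are not shown.

```python
import hashlib
import secrets

N = 16  # seed length in bytes; the PRG stretches n bits to 3n bits

def prg(seed: bytes) -> bytes:
    # Stand-in PRG built from SHAKE-128 (Naor's scheme works with any PRG).
    return hashlib.shake_128(seed).digest(3 * N)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Step 1: the receiver sends a random 3n-bit challenge r.
r = secrets.token_bytes(3 * N)

# Step 2: the committer sends y = PRG(s) for bit 0, or PRG(s) XOR r for bit 1.
def commit(bit: int, r: bytes):
    s = secrets.token_bytes(N)
    y = prg(s) if bit == 0 else xor(prg(s), r)
    return y, s  # y goes to the receiver; s stays secret until opening

# Step 3 (opening): the committer reveals (bit, s) and the receiver re-checks.
def verify(y: bytes, bit: int, s: bytes, r: bytes) -> bool:
    return y == (prg(s) if bit == 0 else xor(prg(s), r))

y, s = commit(1, r)
assert verify(y, 1, s, r) and not verify(y, 0, s, r)
```

Binding holds because a cheating committer would need seeds s0, s1 with PRG(s0) XOR PRG(s1) = r, which is overwhelmingly unlikely over the receiver's random choice of r; hiding follows from the pseudorandomness of the PRG.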
129 pages, 6810 KiB  
Review
Statistical Mechanics of Linear k-mer Lattice Gases: From Theory to Applications
by Julian Jose Riccardo, Pedro Marcelo Pasinetti, Jose Luis Riccardo and Antonio Jose Ramirez-Pastor
Entropy 2025, 27(7), 750; https://doi.org/10.3390/e27070750 - 14 Jul 2025
Viewed by 85
Abstract
The statistical mechanics of structured particles with arbitrary size and shape adsorbed onto discrete lattices presents a longstanding theoretical challenge, mainly due to complex spatial correlations and entropic effects that emerge at finite densities. Even for simplified systems such as hard-core linear k-mers, exact solutions remain limited to low-dimensional or highly constrained cases. In this review, we summarize the main theoretical approaches developed by our research group over the past three decades to describe adsorption phenomena involving linear k-mers—also known as multisite occupancy adsorption—on regular lattices. We examine modern approximations such as an extension to two dimensions of the exact thermodynamic functions obtained in one dimension, the Fractional Statistical Theory of Adsorption based on Haldane’s fractional statistics, and the so-called Occupation Balance based on expansion of the reciprocal of the fugacity, and hybrid approaches such as the semi-empirical model obtained by combining exact one-dimensional calculations and the Guggenheim–DiMarzio approach. For interacting systems, statistical thermodynamics is explored within generalized Bragg–Williams and quasi-chemical frameworks. Particular focus is given to the recently proposed Multiple Exclusion statistics, which capture the correlated exclusion effects inherent to non-monomeric particles. Applications to monolayer and multilayer adsorption are analyzed, with relevance to hydrocarbon separation technologies. Finally, computational strategies, including advanced Monte Carlo techniques, are reviewed in the context of high-density regimes. This work provides a unified framework for understanding entropic and cooperative effects in lattice-adsorbed polyatomic systems and highlights promising directions for future theoretical and computational research. Full article
(This article belongs to the Special Issue Statistical Mechanics of Lattice Gases)
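As an anchor for the approximations surveyed, the exact one-dimensional configurational entropy per site of non-interacting linear k-mers at coverage θ follows from counting the Ω = C(N − (k−1)M, M) arrangements of M rods on N sites (a textbook result; notation assumed here):

```latex
\frac{s(\theta)}{k_B}
= \left(1-\theta+\frac{\theta}{k}\right)\ln\!\left(1-\theta+\frac{\theta}{k}\right)
- \frac{\theta}{k}\,\ln\frac{\theta}{k}
- \left(1-\theta\right)\ln\left(1-\theta\right),
```

with θ = kM/N; for k = 1 it reduces to the ideal lattice-gas mixing entropy, and it vanishes at full coverage for any k.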
17 pages, 1348 KiB  
Article
A Revised Bimodal Generalized Extreme Value Distribution: Theory and Climate Data Application
by Cira E. G. Otiniano, Mathews N. S. Lisboa and Terezinha K. A. Ribeiro
Entropy 2025, 27(7), 749; https://doi.org/10.3390/e27070749 - 14 Jul 2025
Viewed by 84
Abstract
The bimodal generalized extreme value (BGEV) distribution was first introduced in 2023. This distribution offers greater flexibility than the generalized extreme value (GEV) distribution for modeling extreme and heterogeneous (bimodal) events. However, applying this model requires a data-centering technique, as it lacks a location parameter. In this work, we investigate the properties of the BGEV distribution as redefined in 2024, which incorporates a location parameter, thereby enhancing its flexibility in practical applications. We derive explicit expressions for the probability density, the hazard rate, and the quantile function. Furthermore, we establish the identifiability property of this new class of BGEV distributions and compute expressions for the moments, the moment-generating function, and entropy. The applicability of the new model is illustrated using climate data. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
35 pages, 4030 KiB  
Article
An Exergy-Enhanced Improved IGDT-Based Optimal Scheduling Model for Electricity–Hydrogen Urban Integrated Energy Systems
by Min Xie, Lei Qing, Jia-Nan Ye and Yan-Xuan Lu
Entropy 2025, 27(7), 748; https://doi.org/10.3390/e27070748 - 13 Jul 2025
Viewed by 124
Abstract
Urban integrated energy systems (UIESs) play a critical role in facilitating low-carbon and high-efficiency energy transitions. However, existing scheduling strategies predominantly focus on energy quantity and cost, often neglecting the heterogeneity of energy quality across electricity, heat, gas, and hydrogen. This paper presents an exergy-enhanced stochastic optimization framework for the optimal scheduling of electricity–hydrogen urban integrated energy systems (EHUIESs) under multiple uncertainties. By incorporating exergy efficiency evaluation into a Stochastic Optimization–Improved Information Gap Decision Theory (SOI-IGDT) framework, the model dynamically balances economic cost with thermodynamic performance. A penalty-based iterative mechanism is introduced to track exergy deviations and guide the system toward higher energy quality. The proposed approach accounts for uncertainties in renewable output, load variation, and hydrogen-enriched compressed natural gas (HCNG) combustion. Case studies based on a 186-bus UIES coupled with a 20-node HCNG network show that the method improves exergy efficiency by up to 2.18% while maintaining cost robustness across varying confidence levels. These results underscore the significance of integrating exergy into real-time robust optimization for resilient and high-quality energy scheduling. Full article
(This article belongs to the Section Thermodynamics)
13 pages, 5099 KiB  
Article
Effect of Grain Size Distribution on Frictional Wear and Corrosion Properties of (FeCoNi)86Al7Ti7 High-Entropy Alloys
by Qinhu Sun, Pan Ma, Hong Yang, Kaiqiang Xie, Shiguang Wan, Chunqi Sheng, Zhibo Chen, Hongji Yang, Yandong Jia and Konda Gokuldoss Prashanth
Entropy 2025, 27(7), 747; https://doi.org/10.3390/e27070747 - 12 Jul 2025
Viewed by 114
Abstract
Optimization of the grain size distribution in high-entropy alloys (HEAs) is a promising design strategy for improving wear and corrosion resistance. In this study, a (FeCoNi)86Al7Ti7 high-entropy alloy with customized isometric and heterogeneous structures, as well as a fine-crystal isometric design, prepared by spark plasma sintering (SPS), is investigated in terms of microstructure, surface morphology, hardness, frictional wear, and corrosion resistance. The effects of the SPS process on the microstructure and mechanical behavior are elucidated, and the frictional wear and corrosion resistance of the alloys are improved through heterogeneous-structure fine-grain strengthening and uniform fine-grain strengthening. The wear mechanisms and corrosion behavior mechanisms of (FeCoNi)86Al7Ti7 HEAs with different phase structure designs are elaborated. This work highlights the potential of using powder metallurgy to efficiently and precisely control and optimize the multi-scale microstructure of high-entropy alloys, thereby improving their frictional wear and corrosion properties in demanding applications. Full article
(This article belongs to the Special Issue Recent Advances in High Entropy Alloys)
23 pages, 3614 KiB  
Article
A Multimodal Semantic-Enhanced Attention Network for Fake News Detection
by Weijie Chen, Yuzhuo Dang and Xin Zhang
Entropy 2025, 27(7), 746; https://doi.org/10.3390/e27070746 - 12 Jul 2025
Viewed by 310
Abstract
The proliferation of social media platforms has triggered an unprecedented increase in multimodal fake news, creating pressing challenges for content authenticity verification. Current fake news detection systems predominantly rely on isolated unimodal analysis (text or image), failing to exploit critical cross-modal correlations or leverage latent social context cues. To bridge this gap, we introduce the SCCN (Semantic-enhanced Cross-modal Co-attention Network), a novel framework that synergistically combines multimodal features with refined social graph signals. Our approach innovatively combines text, image, and social relation features through a hierarchical fusion framework. First, we extract modality-specific features and enhance semantics by identifying entities in both text and visual data. Second, an improved co-attention mechanism selectively integrates social relations while removing irrelevant connections to reduce noise and exploring latent informative links. Finally, the model is optimized via cross-entropy loss with entropy minimization. Experimental results for benchmark datasets (PHEME and Weibo) show that SCCN consistently outperforms existing approaches, achieving relative accuracy enhancements of 1.7% and 1.6% over the best-performing baseline methods in each dataset. Full article
(This article belongs to the Section Multidisciplinary Applications)
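The training objective described ("cross-entropy loss with entropy minimization") generically takes the form below; the weighting λ and the exact regularization target are assumptions, as the paper's precise formulation may differ.

```latex
\mathcal{L} \;=\; -\sum_{i} y_i \log \hat{y}_i \;+\; \lambda\, H(\hat{y}),
\qquad
H(\hat{y}) \;=\; -\sum_{i} \hat{y}_i \log \hat{y}_i ,
```

where the first term fits the class labels and the second pushes the predicted distribution toward confident, low-entropy outputs.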
32 pages, 735 KiB  
Article
Dynamic Balance: A Thermodynamic Principle for the Emergence of the Golden Ratio in Open Non-Equilibrium Steady States
by Alejandro Ruiz
Entropy 2025, 27(7), 745; https://doi.org/10.3390/e27070745 - 11 Jul 2025
Viewed by 240
Abstract
We develop a symmetry-based variational theory showing that the coarse-grained ratio of work inflow to heat outflow in a driven, dissipative system relaxes to the golden ratio. Two order-2 Möbius transformations—a self-dual flip and a self-similar shift—generate a discrete non-abelian subgroup of PGL(2, Q(√5)). Requiring any smooth, strictly convex Lyapunov functional to be invariant under both maps enforces a single non-equilibrium fixed point: the golden mean. We confirm this result by (i) a gradient-flow partial differential equation, (ii) a birth–death Markov chain whose continuum limit is Fokker–Planck, (iii) a Martin–Siggia–Rose field theory, and (iv) exact Ward identities that protect the fixed point against noise. Microscopic kinetics merely set the approach rate; three parameter-free invariants emerge: a 62%:38% split between entropy production and useful power, an RG-invariant diffusion coefficient D_α = ξ^z/τ linking relaxation time and correlation length, and a ϑ = 45° eigen-angle that maps to the golden logarithmic spiral. The same dual symmetry underlies scaling laws in rotating turbulence, plant phyllotaxis, cortical avalanches, quantum critical metals, and even de Sitter cosmology, providing a falsifiable, unifying principle for pattern formation far from equilibrium. Full article
(This article belongs to the Section Entropy and Biology)
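The fixed-point mechanism behind this claim can be illustrated with the self-similar Möbius map x ↦ 1 + 1/x (an illustrative choice; the paper's two order-2 generators are specified in the text):

```latex
x \;=\; 1 + \frac{1}{x}
\;\Longrightarrow\;
x^2 - x - 1 = 0
\;\Longrightarrow\;
x = \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618,
```

and, by the paper's argument, invariance of a smooth, strictly convex Lyapunov functional under both generators singles out this golden-mean value as the unique non-equilibrium fixed point.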
36 pages, 3682 KiB  
Article
Enhancing s-CO2 Brayton Power Cycle Efficiency in Cold Ambient Conditions Through Working Fluid Blends
by Paul Tafur-Escanta, Luis Coco-Enríquez, Robert Valencia-Chapi and Javier Muñoz-Antón
Entropy 2025, 27(7), 744; https://doi.org/10.3390/e27070744 - 11 Jul 2025
Viewed by 133
Abstract
Supercritical carbon dioxide (s-CO2) Brayton cycles have emerged as a promising technology for high-efficiency power generation, owing to their compact architecture and favorable thermophysical properties. However, their performance degrades significantly under cold-climate conditions—such as those encountered in Greenland, Russia, Canada, Scandinavia, and Alaska—due to the proximity to the fluid’s critical point. This study investigates the behavior of the recompression Brayton cycle (RBC) under subzero ambient temperatures through the incorporation of low-critical-temperature additives to create CO2-based binary mixtures. The working fluids examined include methane (CH4), tetrafluoromethane (CF4), nitrogen trifluoride (NF3), and krypton (Kr). Simulation results show that CH4- and CF4-rich mixtures can achieve thermal efficiency improvements of up to 10 percentage points over pure CO2. NF3-containing blends yield solid performance in moderately cold environments, while Kr-based mixtures provide modest but consistent efficiency gains. At low compressor inlet temperatures, the high-temperature recuperator (HTR) becomes the dominant performance-limiting component. Optimal distribution of recuperator conductance (UA) favors increased HTR sizing when mixtures are employed, ensuring effective heat recovery across larger temperature differentials. The study concludes with a comparative exergy analysis between pure CO2 and mixture-based cycles in RBC architecture. The findings highlight the potential of custom-tailored working fluids to enhance thermodynamic performance and operational stability of s-CO2 power systems under cold-climate conditions. Full article
(This article belongs to the Section Thermodynamics)
21 pages, 1362 KiB  
Article
Decentralized Consensus Protocols on SO(4)^N and TSO(4)^N with Reshaping
by Eric A. Butcher and Vianella Spaeth
Entropy 2025, 27(7), 743; https://doi.org/10.3390/e27070743 - 11 Jul 2025
Viewed by 224
Abstract
Consensus protocols for a multi-agent networked system consist of strategies that align the states of all agents that share information according to a given network topology, despite challenges such as communication limitations, time-varying networks, and communication delays. The special orthogonal group SO(n) plays a key role in applications from rigid-body attitude synchronization to machine learning on Lie groups, particularly in fields like physics-informed learning and geometric deep learning. In this paper, N-agent consensus protocols are proposed on the Lie group SO(4) and the corresponding tangent bundle TSO(4), in which the state spaces are SO(4)^N and TSO(4)^N, respectively. In particular, when using communication topologies such as a ring graph, for which the local stability of non-consensus equilibria is retained in the closed loop, a consensus protocol that leverages a reshaping strategy is proposed to destabilize non-consensus equilibria and produce consensus with almost-global stability on SO(4)^N or TSO(4)^N. Lyapunov-based stability guarantees are obtained, and simulations are conducted to illustrate the advantages of these proposed consensus protocols. Full article
(This article belongs to the Special Issue Lie Group Machine Learning)
26 pages, 4823 KiB  
Article
Robust Fractional Low Order Adaptive Linear Chirplet Transform and Its Application to Fault Analysis
by Junbo Long, Changshou Deng, Haibin Wang and Youxue Zhou
Entropy 2025, 27(7), 742; https://doi.org/10.3390/e27070742 - 11 Jul 2025
Viewed by 171
Abstract
Time-frequency analysis (TFA) technology is an important tool for analyzing non-Gaussian mechanical fault vibration signals. In the complex background of infinite variance process noise and Gaussian colored noise, it is difficult for traditional methods to obtain the highly concentrated time-frequency representation (TFR) of fault vibration signals. Based on the insensitive property of fractional low-order statistics for infinite variance and Gaussian processes, robust fractional lower order adaptive linear chirplet transform (FLOACT) and fractional lower order adaptive scaling chirplet transform (FLOASCT) methods are proposed to suppress the mixed complex noise in this paper. The calculation steps and processes of the algorithms are summarized and deduced in detail. The experimental simulation results show that the improved FLOACT and FLOASCT methods have good effects on multi-component signals with short frequency intervals in the time-frequency domain and even cross-frequency trajectories in the strong impulse background noise environment. Finally, the proposed methods are applied to the feature analysis and extraction of the mechanical outer race fault vibration signals in complex background environments, and the results show that they have good estimation accuracy and effectiveness in lower MSNR, which indicate their robustness and adaptability. Full article
(This article belongs to the Section Signal and Data Analysis)
12 pages, 843 KiB  
Article
Thermalization in Asymmetric Harmonic Chains
by Weicheng Fu, Sihan Feng, Yong Zhang and Hong Zhao
Entropy 2025, 27(7), 741; https://doi.org/10.3390/e27070741 - 11 Jul 2025
Viewed by 198
Abstract
The symmetry of the interparticle interaction potential (IIP) plays a critical role in determining the thermodynamic and transport properties of solids. This study investigates the isolated effect of IIP asymmetry on thermalization. Asymmetry and nonlinearity are typically intertwined. To isolate the effect of asymmetry, we introduce a one-dimensional asymmetric harmonic (AH) model whose IIP possesses asymmetry but no nonlinearity, evidenced by energy-independent vibrational frequencies. Extensive numerical simulations confirm a power-law relationship between the thermalization time T_eq and perturbation strength for the AH chain, revealing an exponent larger than the previously observed inverse-square law in the thermodynamic limit. Upon adding symmetric quartic nonlinearity to the AH model, we systematically study thermalization under combined asymmetry and nonlinearity. Matthiessen’s rule provides a good estimate of T_eq in this case. Our results demonstrate that asymmetry plays a distinct role in enhancing higher-order effects and governing relaxation dynamics. Full article
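One natural realization of an interaction that is asymmetric yet free of nonlinearity (an assumption for illustration; the paper defines its own AH model) is a piecewise-quadratic potential, whose oscillation frequency is amplitude-independent because the particle spends exactly half a period on each side:

```latex
V(x) =
\begin{cases}
\tfrac{1}{2}\, k_- x^2, & x < 0,\\
\tfrac{1}{2}\, k_+ x^2, & x \ge 0,
\end{cases}
\qquad
\omega = \frac{2\,\omega_-\,\omega_+}{\omega_- + \omega_+},
\quad \omega_\pm = \sqrt{k_\pm / m},
```

since the period T = π/ω₋ + π/ω₊ does not depend on the energy, matching the abstract's criterion of energy-independent vibrational frequencies.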
14 pages, 2812 KiB  
Perspective
The Generation of Wind Velocity via Scale Invariant Gibbs Free Energy: Turbulence Drives the General Circulation
by Adrian F. Tuck
Entropy 2025, 27(7), 740; https://doi.org/10.3390/e27070740 - 10 Jul 2025
Viewed by 194
Abstract
The mechanism for the upscale deposition of energy into the atmosphere from molecules and photons up to organized wind systems is examined. This analysis rests on the statistical multifractal analysis of airborne observations. The results show that the persistence of molecular velocity after collision in breaking the continuous translational symmetry of an equilibrated gas is causative. The symmetry breaking may be caused by excited photofragments with the associated persistence of molecular velocity after collision, interaction with condensed phase surfaces (solid or liquid), or, in a scaling environment, an adjacent scale having a different velocity and temperature. The relationship of these factors for the solution to the Navier–Stokes equation in an atmospheric context is considered. The scale invariant version of Gibbs free energy, carried by the most energetic molecules, enables the acceleration of organized flow (winds) from the smallest planetary scales by virtue of the nonlinearity of the mechanism, subject to dissipation by the more numerous average molecules maintaining an operational temperature via infrared radiation to the cold sink of space. The fastest moving molecules also affect the transfer of infrared radiation because their higher kinetic energy and the associated more-energetic collisions contribute more to the far wings of the spectral lines, where the collisional displacement from the central energy level gap is greatest and the lines are less self-absorbed. The relationship of events at these scales to macroscopic variables such as the thermal wind equation and its components will be considered in the Discussion section. An attempt is made to synthesize the mechanisms by which winds are generated and sustained, on all scales, by appealing to published works since 2003. This synthesis produces a view of the general circulation that includes thermodynamics and the defining role of turbulence in driving it. Full article
(This article belongs to the Section Statistical Physics)
20 pages, 1765 KiB  
Article
Can Informativity Effects Be Predictability Effects in Disguise?
by Vsevolod Kapatsinski
Entropy 2025, 27(7), 739; https://doi.org/10.3390/e27070739 - 10 Jul 2025
Viewed by 416
Abstract
Recent work in corpus linguistics has observed that informativity predicts articulatory reduction of a linguistic unit above and beyond the unit’s predictability in the local context, i.e., the unit’s probability given the current context. Informativity of a unit is the inverse of average (log-scaled) predictability and corresponds to its information content. Research in the field has interpreted effects of informativity as speakers being sensitive to the information content of a unit in deciding how much effort to put into pronouncing it or as accumulation of memories of pronunciation details in long-term memory representations. However, average predictability can improve the estimate of local predictability of a unit above and beyond the observed predictability in that context, especially when that context is rare. Therefore, informativity can contribute to explaining variance in a dependent variable like reduction above and beyond local predictability simply because informativity improves the (inherently noisy) estimate of local predictability. This paper shows how to estimate the proportion of an observed informativity effect that is likely to be artifactual, due entirely to informativity improving the estimates of predictability, via simulation. The proposed simulation approach can be used to investigate whether an effect of informativity is likely to be real, under the assumption that corpus probabilities are an unbiased estimate of probabilities driving reduction behavior, and how much of it is likely to be due to noise in predictability estimates, in any real dataset. Full article
(This article belongs to the Special Issue Complexity Characteristics of Natural Language)
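In the notation standard in this literature, the two quantities being disentangled are the local predictability of a unit u in a context c and its informativity, the context-averaged surprisal:

```latex
\text{predictability:}\; -\log_2 P(u \mid c),
\qquad
I(u) \;=\; -\sum_{c} P(c \mid u)\, \log_2 P(u \mid c) .
```

A rare context yields a noisy estimate of P(u | c), while I(u), being an average over contexts, partially smooths that noise; this is exactly the artifactual pathway the proposed simulations quantify.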
19 pages, 24556 KiB  
Article
Harmonic Aggregation Entropy: A Highly Discriminative Harmonic Feature Estimator for Time Series
by Ye Wang, Zhentao Yu, Cheng Chi, Bozhong Lei, Jianxin Pei and Dan Wang
Entropy 2025, 27(7), 738; https://doi.org/10.3390/e27070738 - 10 Jul 2025
Viewed by 142
Abstract
Harmonics are a common phenomenon widely present in power systems. The presence of harmonics not only increases the energy consumption of equipment but also poses hidden risks to the safety and stealth performance of large ships. Thus, there is an urgent need for a method of detecting the harmonic characteristics of time series. We propose a novel harmonic feature estimation method, termed Harmonic Aggregation Entropy (HaAgEn), which effectively discriminates against background noise. The method is based on bispectrum analysis; utilizing the distribution characteristics of harmonic signals in the bispectrum matrix, a new Diagonal Bi-directional Integral Bispectrum (DBIB) method is employed to effectively extract harmonic features within the bispectrum matrix. This approach addresses the issues associated with traditional time–frequency analysis methods, such as the large computational burden and the lack of specificity in feature extraction. The integration results of the DBIB along the two frequency axes, I_x and I_y, are combined via cross-entropy to derive HaAgEn. It is verified that HaAgEn is significantly more sensitive to harmonic components in the signal than other types of entropy, thereby better addressing harmonic detection issues and reducing feature redundancy. The detection accuracy for harmonic components in the shaft-rate electromagnetic field signal, as evidenced by sea trial data, reaches 96.8%, which is significantly higher than that of other detection methods. This provides a novel technical approach to the issue of harmonic detection in industrial applications. Full article
(This article belongs to the Section Signal and Data Analysis)
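The DBIB integration and the HaAgEn cross-entropy step are specific to this paper, but the underlying bispectrum is standard; for a signal with Fourier transform X(f),

```latex
B(f_1, f_2) \;=\; \mathbb{E}\!\left[\, X(f_1)\, X(f_2)\, X^{*}(f_1 + f_2) \,\right],
```

which peaks at (f₁, f₂) = (k f₀, l f₀) for harmonics of a common fundamental f₀, because phase-coupled triples (k f₀, l f₀, (k+l) f₀) add coherently while independent noise components average out.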
14 pages, 1922 KiB  
Article
Asymmetric Protocols for Mode Pairing Quantum Key Distribution with Finite-Key Analysis
by Zhenhua Li, Tianqi Dou, Yuheng Xie, Weiwen Kong, Yang Liu, Haiqiang Ma and Jianjun Tang
Entropy 2025, 27(7), 737; https://doi.org/10.3390/e27070737 - 9 Jul 2025
Viewed by 175
Abstract
The mode-pairing quantum key distribution (MP-QKD) protocol has attracted considerable attention for its capability to ensure high secure key rates over long distances without requiring global phase locking. However, ensuring symmetric channels for the MP-QKD protocol is challenging in practical quantum communication networks. Previous studies on the asymmetric MP-QKD protocol have relied on ideal decoy-state assumptions and infinite-key analysis, which are unattainable for real-world deployment. In this paper, we conduct a security analysis of the asymmetric MP-QKD protocol with finite-key analysis, discarding the previously impractical assumptions made in the decoy-state method. Combined with statistical fluctuation analysis, we globally optimize the 10 independent parameters of the asymmetric MP-QKD protocol by employing our modified particle swarm optimization. Through further analysis, the simulation results demonstrate that our work achieves improved secure key rates and transmission distances compared to the strategy with additional attenuation. We further investigate the relationship between the intensities and probabilities of the signal, decoy, and vacuum states and the transmission distance, facilitating their more efficient deployment in future quantum networks. Full article
(This article belongs to the Section Quantum Information)
26 pages, 3087 KiB  
Article
Pre-Warning for the Remaining Time to Alarm Based on Variation Rates and Mixture Entropies
by Zijiang Yang, Jiandong Wang, Honghai Li and Song Gao
Entropy 2025, 27(7), 736; https://doi.org/10.3390/e27070736 - 9 Jul 2025
Viewed by 172
Abstract
Alarm systems play crucial roles in industrial process safety. To support tackling an accident that is about to occur after an alarm, a pre-warning method is proposed for a special class of industrial process variables to alert operators to the remaining time to alarm. The main idea of the proposed method is to estimate the remaining time to alarm based on variation rates and mixture entropies of qualitative trends in univariate variables. If the remaining time to alarm is no longer than the pre-warning threshold and its mixture entropy is small enough, then a warning is generated to alert the operators. One challenge for the proposed method is how to determine an optimal pre-warning threshold by considering the uncertainties induced by the sample distribution of the remaining time to alarm, subject to the constraint of the required false warning rate. This challenge is addressed by utilizing Bayesian estimation theory to estimate the confidence intervals for all candidates for the pre-warning threshold, and the optimal one is selected as the candidate whose upper confidence bound is nearest to the required false warning rate. Another challenge is how to measure the possibility of the current trend segment increasing to the alarm threshold, and this challenge is overcome by adopting the mixture entropy as a possibility measure. Numerical and industrial examples illustrate the effectiveness of the proposed method and its advantages over existing methods. Full article
(This article belongs to the Special Issue Failure Diagnosis of Complex Systems)