Journal Description
Entropy
is an international and interdisciplinary peer-reviewed open access journal of entropy and information studies, published monthly online by MDPI. The International Society for the Study of Information (IS4SI) and Spanish Society of Biomedical Engineering (SEIB) are affiliated with Entropy and their members receive a discount on the article processing charge.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), Inspec, PubMed, PMC, Astrophysics Data System, and other databases.
- Journal Rank: JCR - Q2 (Physics, Multidisciplinary) / CiteScore - Q1 (Mathematical Physics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 21.8 days after submission; the time from acceptance to publication is 2.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Testimonials: See what our editors and authors say about Entropy.
- Companion journals for Entropy include: Foundations, Thermo and Complexities.
Impact Factor: 2.0 (2024)
5-Year Impact Factor: 2.2 (2024)
Latest Articles
Three-Dimensional Sound Source Localization with Microphone Array Combining Spatial Entropy Quantification and Machine Learning Correction
Entropy 2025, 27(9), 942; https://doi.org/10.3390/e27090942 (registering DOI) - 9 Sep 2025
Abstract
In recent years, with the popularization of intelligent scene monitoring, sound source localization (SSL) has become a major means for indoor monitoring and target positioning. However, existing SSL solutions are difficult to extend to multi-source and three-dimensional scenarios. To address this, this paper proposes a three-dimensional sound source localization technology based on eight microphones. Specifically, the method employs a rectangular eight-microphone array and captures Direction-of-Arrival (DOA) information via the direct path relative transfer function (DP-RTF). It introduces spatial entropy to quantify the uncertainty caused by the exponentially growing DOA combinations as the number of sound sources increases, while further reducing the spatial entropy of sound source localization through geometric intersection. This solves the problem that traditional SSL methods cannot be applied to multi-source and three-dimensional scenarios. On the other hand, machine learning is used to eliminate coordinate deviations caused by DP-RTF DOA estimation errors and by deviations in the microphone geometric parameters. Both simulation experiments and real-scene experiments show that the positioning error of the proposed method in three-dimensional scenarios is about 10.0 cm.
Full article
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing, Third Edition)
Open Access Article
Federated Learning over MU-MIMO Vehicular Networks
by
Maria Raftopoulou, José Mairton B. da Silva, Jr., Remco Litjens, H. Vincent Poor and Piet Van Mieghem
Entropy 2025, 27(9), 941; https://doi.org/10.3390/e27090941 (registering DOI) - 9 Sep 2025
Abstract
Many algorithms related to vehicular applications, such as enhanced perception of the environment, benefit from frequent updates and the use of data from multiple vehicles. Federated learning is a promising method to improve the accuracy of algorithms in the context of vehicular networks. However, limited communication bandwidth, varying wireless channel quality, and potential latency requirements may impact the number of vehicles selected for training per communication round and their assigned radio resources. In this work, we characterize the vehicles participating in federated learning based on their importance to the learning process and their use of wireless resources. We then address the joint vehicle selection and resource allocation problem, considering multi-cell networks with multi-user multiple-input multiple-output (MU-MIMO)-capable base stations and vehicles. We propose a “vehicle-beam-iterative” algorithm to approximate the solution to the resulting optimization problem. We then evaluate its performance through extensive simulations, using realistic road and mobility models, for the task of object classification of European traffic signs. Our results indicate that MU-MIMO improves the convergence time of the global model. Moreover, the application-specific accuracy targets are reached faster in scenarios where the vehicles have the same training data set sizes than in scenarios where the data set sizes differ.
Full article
Open Access Article
Determining the Upper-Bound on the Code Distance of Quantum Stabilizer Codes Through the Monte Carlo Method Based on Fully Decoupled Belief Propagation
by
Zhipeng Liang, Zicheng Wang, Zhengzhong Yi, Fusheng Yang and Xuan Wang
Entropy 2025, 27(9), 940; https://doi.org/10.3390/e27090940 (registering DOI) - 9 Sep 2025
Abstract
The code distance is a critical parameter of quantum stabilizer codes (QSCs), and determining it—whether exactly or approximately—is known to be an NP-complete problem. However, its upper bound can be determined efficiently by some methods such as the Monte Carlo method. Leveraging the Monte Carlo method, we propose an algorithm to compute the upper bound on the code distance of a given QSC using fully decoupled belief propagation combined with ordered statistics decoding (FDBP-OSD). Our algorithm demonstrates high precision: for various QSCs with known distances, the computed upper bounds match the actual values. Additionally, we explore upper bounds for the minimum weight of logical X operators in the Z-type Tanner-graph-recursive-expansion (Z-TGRE) code and the Chamon code—an XYZ product code constructed from three repetition codes. The results on Z-TGRE codes align with theoretical analysis, while the results on Chamon codes suggest that XYZ product codes may achieve a code distance of , which supports the conjecture of Leverrier et al.
Full article
(This article belongs to the Special Issue Quantum Error Correction and Fault-Tolerance)
Open Access Article
Multivariate Time Series Anomaly Detection Based on Inverted Transformer with Multivariate Memory Gate
by
Yuan Ma, Weiwei Liu, Changming Xu, Luyi Bai, Ende Zhang and Junwei Wang
Entropy 2025, 27(9), 939; https://doi.org/10.3390/e27090939 (registering DOI) - 8 Sep 2025
Abstract
In the industrial IoT, it is vital to detect anomalies in multivariate time series, yet it faces numerous challenges, including highly imbalanced datasets, complex and high-dimensional data, and large disparities across variables. Despite the recent surge in proposals for deep learning-based methods, these approaches typically treat the multivariate data at each point in time as a unique token, weakening the personalized features and dependency relationships between variables. As a result, their performance tends to degrade under highly imbalanced conditions, and reconstruction-based models are prone to overfitting abnormal patterns, leading to excessive reconstruction of anomalous inputs. In this paper, we propose ITMMG, an inverted Transformer with a multivariate memory gate. ITMMG employs an inverted token embedding strategy and multivariate memory to capture deep dependencies among variables and the normal patterns of individual variables. The experimental results obtained demonstrate that the proposed method exhibits superior performance in terms of detection accuracy and robustness compared with existing baseline methods across a range of standard time series anomaly detection datasets. This significantly reduces the probability of misclassifying anomalous samples during reconstruction.
Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open Access Article
GNSS Interference Identification Driven by Eye Pattern Features: ICOA–CNN–ResNet–BiLSTM Optimized Deep Learning Architecture
by
Chuanyu Wu, Yuanfa Ji and Xiyan Sun
Entropy 2025, 27(9), 938; https://doi.org/10.3390/e27090938 (registering DOI) - 7 Sep 2025
Abstract
In this study, the key challenges faced by global navigation satellite systems (GNSSs) in the field of security are addressed, and an eye diagram-based deep learning framework for intelligent classification of interference types is proposed. GNSS signals are first transformed into two-dimensional eye diagrams, enabling a novel visual representation wherein interference types are distinguished through entropy-centric feature analysis. Specifically, the quantification of information entropy within these diagrams serves as a theoretical foundation for extracting salient discriminative features, reflecting the structural complexity and uncertainty of the underlying signal distortions. We designed a hybrid architecture that integrates spatial feature extraction, gradient stability enhancement, and time-dynamics modeling capabilities, combining the advantages of a convolutional neural network, a residual network, and a bidirectional long short-term memory network. To further improve model performance, we propose an improved coati optimization algorithm (ICOA), which combines chaotic mapping, an elite perturbation mechanism, and an adaptive weighting strategy for hyperparameter optimization. Compared with mainstream optimization methods, this algorithm improves the convergence accuracy by more than 30%. Experimental results on jamming datasets (continuous wave interference, chirp interference, pulse interference, frequency-modulated interference, amplitude-modulated interference, and spoofing interference) demonstrate that our method achieves superior performance in terms of accuracy, precision, recall, F1 score, and specificity, with values of 98.02%, 97.09%, 97.24%, 97.14%, and 99.65%, respectively, representing improvements of 1.98%, 2.80%, 6.10%, 4.59%, and 0.33% over the next-best model. This study provides an efficient, entropy-aware, intelligent, and practically feasible solution for GNSS interference identification.
Full article
(This article belongs to the Section Signal and Data Analysis)
Open Access Article
3V-GM: A Tri-Layer “Point–Line–Plane” Critical Node Identification Algorithm for New Power Systems
by
Yuzhuo Dai, Min Zhao, Gengchen Zhang and Tianze Zhao
Entropy 2025, 27(9), 937; https://doi.org/10.3390/e27090937 (registering DOI) - 7 Sep 2025
Abstract
With the increasing penetration of renewable energy, the stochastic and intermittent nature of its generation increases operational uncertainty and vulnerability, posing significant challenges for grid stability. However, traditional algorithms typically identify critical nodes by focusing solely on the network topology or power flow, or by combining the two, which leads to the inaccurate and incomplete identification of essential nodes. To address this, we propose the Three-Dimensional Value-Based Gravity Model (3V-GM), which integrates structural and electrical–physical attributes across three layers. In the plane layer, we combine each node’s global topological position with its real-time supply–demand voltage state. In the line layer, we introduce an electrical coupling distance to quantify the strength of electromagnetic interactions between nodes. In the point layer, we apply eigenvector centrality to detect latent hub nodes whose influence is not immediately apparent. The performance of our proposed method was evaluated by examining the change in the load loss rate as nodes were sequentially removed. To assess the effectiveness of the 3V-GM approach, simulations were conducted on the IEEE 39 system, as well as six other benchmark networks. The simulations were performed using Python scripts, with operational parameters such as bus voltages, active and reactive power flows, and branch impedances obtained from standard test cases provided by MATPOWER v7.1. The results consistently show that removing the same number of nodes identified by 3V-GM leads to a greater load loss compared to the six baseline methods. This demonstrates the superior accuracy and stability of our approach. Additionally, an ablation experiment, which decomposed and recombined the three layers, further highlights the unique contribution of each component to the overall performance.
Full article
(This article belongs to the Section Complexity)
Open Access Article
Infodemic Source Detection with Information Flow: Foundations and Scalable Computation
by
Zimeng Wang, Chao Zhao, Qiaoqiao Zhou, Chee Wei Tan and Chung Chan
Entropy 2025, 27(9), 936; https://doi.org/10.3390/e27090936 (registering DOI) - 6 Sep 2025
Abstract
We consider the problem of identifying the source of a rumor in a network, given only a snapshot observation of infected nodes after the rumor has spread. Classical approaches, such as the maximum likelihood (ML) and joint maximum likelihood (JML) estimators based on the conventional Susceptible–Infectious (SI) model, exhibit degeneracy, failing to uniquely identify the source even in simple network structures. To address these limitations, we propose a generalized estimator that incorporates independent random observation times. To capture the structure of information flow beyond graphs, our formulations consider rate constraints on the rumor and the multicast capacities for cyclic polylinking networks. Furthermore, we develop forward elimination and backward search algorithms for rate-constrained source detection and validate their effectiveness and scalability through comprehensive simulations. Our study establishes a rigorous and scalable foundation for infodemic source detection.
Full article
(This article belongs to the Special Issue Applications of Information Theory to Machine Learning)
Open Access Article
Application of the Three-Group Model to the 2024 US Elections
by
Miron Kaufman, Sanda Kaufman and Hung T. Diep
Entropy 2025, 27(9), 935; https://doi.org/10.3390/e27090935 (registering DOI) - 6 Sep 2025
Abstract
Political polarization in Western democracies has accelerated in the last decade, with negative social consequences. Research across disciplines on antecedents, manifestations, and societal impacts is hindered by social systems’ complexity: their constant flux impedes tracing the causes of observed trends and predicting their consequences, hampering mitigation. Social physics models exploit a characteristic of complex systems: what seems chaotic at one observation level may exhibit patterns at a higher level. Therefore, dynamic modeling of complex systems allows anticipation of possible events. We use this approach to anticipate the 2024 US election results. We consider the highly polarized Democrats and Republicans, and Independents fluctuating between them. We generate average group-stance scenarios in time and explore how polarization and depolarization might have affected 2024 voting outcomes. We find that reducing polarization might advantage the larger voting group. We also explore ways to reduce polarization and their potential effects on election results. The results shed light on the perils of polarization trends and on possibilities for changing course.
Full article
(This article belongs to the Special Issue Computational and Statistical Physics Approaches for Complex Systems and Social Phenomena, 3rd Edition)
Open Access Article
Time-Varying Autoregressive Models: A Novel Approach Using Physics-Informed Neural Networks
by
Zhixuan Jia and Chengcheng Zhang
Entropy 2025, 27(9), 934; https://doi.org/10.3390/e27090934 - 4 Sep 2025
Abstract
Time series models are widely used to examine temporal dynamics and uncover patterns across diverse fields. A commonly employed approach for modeling such data is the (Vector) Autoregressive (AR/VAR) model, in which each variable is represented as a linear combination of its own and others’ lagged values. However, the traditional (V)AR framework relies on the key assumption of stationarity, that autoregressive coefficients remain constant over time, which is often violated in practice, especially in systems affected by structural breaks, seasonal fluctuations, or evolving causal mechanisms. To overcome this limitation, Time-Varying (Vector) Autoregressive (TV-AR/TV-VAR) models have been developed, enabling model parameters to evolve over time and thus better capturing non-stationary behavior. Conventional approaches to estimating such models, including generalized additive modeling and kernel smoothing techniques, often require strong assumptions about basis functions, which can restrict their flexibility and applicability. To address these challenges, we introduce a novel framework that leverages physics-informed neural networks (PINN) to model TV-AR/TV-VAR processes. The proposed method extends the PINN framework to time series analysis by reducing reliance on explicitly defined physical structures, thereby broadening its applicability. Its effectiveness is validated through simulations on synthetic data and an empirical study of real-world health-related time series.
Full article
Open Access Article
Ascertaining Susceptibilities in Smart Contracts: A Quantum Machine Learning Approach
by
Amulyashree Sridhar, Kalyan Nagaraj, Shambhavi Bangalore Ravi and Sindhu Kurup
Entropy 2025, 27(9), 933; https://doi.org/10.3390/e27090933 - 4 Sep 2025
Abstract
The current research aims to discover applications of QML approaches in realizing liabilities within smart contracts. These contracts are essential commodities of the blockchain interface and are also decisive in developing decentralized products. But liabilities in smart contracts could result in unfamiliar system failures. Presently, static detection tools are utilized to discover liabilities. However, they could produce false positives due to their dependency on predefined rules. In addition, these rules can often be superseded, failing to generalize to new contracts. The detection of liabilities with ML approaches, correspondingly, has certain limitations with contract size due to storage and performance issues. Nevertheless, employing QML approaches could be beneficial as they do not necessitate any preconceived rules. They learn from data attributes during the training process and are employed as alternatives to ML approaches in terms of storage and performance. The present study employs four QML approaches, namely QNN, QSVM, VQC, and QRF, for discovering susceptibilities. Experimentation revealed that the QNN model surpasses the other approaches in detecting liabilities, with an accuracy of 82.43%. To further validate its feasibility and performance, the model was assessed on a separate test dataset, the SolidiFI dataset, and the outcomes remained consistent. Additionally, the performance of the model was statistically validated using McNemar’s test.
Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open Access Article
Channel Estimation for Intelligent Reflecting Surface Empowered Coal Mine Wireless Communication Systems
by
Yang Liu, Kaikai Guo, Xiaoyue Li, Bin Wang and Yanhong Xu
Entropy 2025, 27(9), 932; https://doi.org/10.3390/e27090932 - 4 Sep 2025
Abstract
The confined space of coal mines, characterized by curved tunnels with rough surfaces and a variety of deployed production equipment, induces severe signal attenuation and interruption, which significantly degrades the accuracy of conventional channel estimation algorithms applied in coal mine wireless communication systems. To address these challenges, we propose a modified Bilinear Generalized Approximate Message Passing (mBiGAMP) algorithm enhanced by intelligent reflecting surface (IRS) technology to improve channel estimation accuracy in coal mine scenarios. Due to the presence of abundant coal-carrying belt conveyors, we establish a hybrid channel model integrating both fast-varying and quasi-static components to accurately model the unique propagation environment in coal mines. Specifically, the fast-varying channel captures the varying signal paths affected by moving conveyors, while the quasi-static channel represents stable direct links. Since this hybrid structure necessitates an augmented factor graph, we introduce two additional factor nodes and variable nodes to characterize the distinct message-passing behaviors, and then rigorously derive the mBiGAMP algorithm. Simulation results demonstrate that the proposed mBiGAMP algorithm achieves superior channel estimation accuracy in dynamic conveyor-affected coal mine scenarios compared with other state-of-the-art methods, showing significant improvements in both separated and cascaded channel estimation. Specifically, when the NMSE is , the SNR of mBiGAMP is improved by approximately 5 dB, 6 dB, and 14 dB compared with the Dual-Structure Orthogonal Matching Pursuit (DS-OMP), Parallel Factor (PARAFAC), and Least Squares (LS) algorithms, respectively. We also verify the convergence behavior of the proposed mBiGAMP algorithm across the operational signal-to-noise ratio range. Furthermore, we investigate the impact of the number of pilots on channel estimation performance, which reveals that the proposed mBiGAMP algorithm requires fewer pilots than other methods to accurately recover channel state information while preserving estimation fidelity.
Full article
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)
Open Access Review
Sherlock Holmes Doesn’t Play Dice: The Mathematics of Uncertain Reasoning When Something May Happen, That You Are Not Even Able to Figure Out
by
Guido Fioretti
Entropy 2025, 27(9), 931; https://doi.org/10.3390/e27090931 - 4 Sep 2025
Abstract
While Evidence Theory (also known as Dempster–Shafer Theory, or Belief Functions Theory) is being increasingly used in data fusion, its potentialities in the Social and Life Sciences are often obscured by a lack of awareness of its distinctive features. In particular, with this paper I stress that an extended version of Evidence Theory can express the uncertainty deriving from the fear that events one is not even able to figure out may materialize. By contrast, Probability Theory must limit itself to the possibilities that a decision-maker is currently envisaging. I compare this extended version of Evidence Theory to cutting-edge extensions of Probability Theory, such as imprecise and sub-additive probabilities, as well as unconventional versions of Information Theory that are employed in data fusion and transmission of cultural information. A possible application to creative usage of Large Language Models is outlined, along with further extensions to multi-agent interactions.
Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open Access Article
Optimized Generalized LDPC Convolutional Codes
by
Li Deng, Kai Tao, Zhiping Shi, You Zhang, Yinlong Shi, Jian Wang, Tian Liu and Yongben Wang
Entropy 2025, 27(9), 930; https://doi.org/10.3390/e27090930 - 4 Sep 2025
Abstract
In this paper, optimized encoding and decoding schemes are proposed for generalized LDPC convolutional codes (GLDPC–CCs). In terms of the encoding scheme, a flexible doping method is proposed, which replaces multiple single parity check (SPC) nodes with one generalized check (GC) node. Different types of BCH codes can be selected as the GC node by adjusting the number of SPC nodes to be replaced. Moreover, by fine-tuning the truncated bits and the extended parity check bits, or by reasonably adjusting the GC node distribution, the performance of GLDPC–CCs can be further improved. In terms of the decoding scheme, a hybrid layered normalized min-sum (HLNMS) decoding algorithm is proposed, where layered normalized min-sum (LNMS) decoding is used for SPC nodes and Chase–Pyndiah decoding is adopted for GC nodes. Based on an analysis of the decoding convergence of GC and SPC nodes, an adaptive weight factor is designed for GC nodes that changes with the decoding iterations, aiming to further improve the decoding performance. In addition, an early-stopping decoding strategy based on a minimum amplitude threshold of mutual information is proposed in order to reduce the decoding complexity. Simulation results verify the superiority of the proposed scheme for GLDPC–CCs over the prior art, which has great application potential in optical communication systems.
Full article
(This article belongs to the Special Issue LDPC Codes for Communication Systems)
Open Access Article
Comprehensive Examination of Unrolled Networks for Solving Linear Inverse Problems
by
Yuxi Chen, Xi Chen, Arian Maleki and Shirin Jalali
Entropy 2025, 27(9), 929; https://doi.org/10.3390/e27090929 - 3 Sep 2025
Abstract
Unrolled networks have become prevalent in various computer vision and imaging tasks. Although they have demonstrated remarkable efficacy in solving specific computer vision and computational imaging tasks, their adaptation to other applications presents considerable challenges. This is primarily due to the multitude of design decisions that practitioners working on new applications must navigate, each potentially affecting the network’s overall performance. These decisions include selecting the optimization algorithm, defining the loss function, and determining the deep architecture, among others. Compounding the issue, evaluating each design choice requires time-consuming simulations to train, fine-tune the neural network, and optimize its performance. As a result, the process of exploring multiple options and identifying the optimal configuration becomes time-consuming and computationally demanding. The main objectives of this paper are (1) to unify some ideas and methodologies used in unrolled networks to reduce the number of design choices a user has to make, and (2) to report a comprehensive ablation study to discuss the impact of each of the choices involved in designing unrolled networks and present practical recommendations based on our findings. We anticipate that this study will help scientists and engineers to design unrolled networks for their applications and diagnose problems within their networks efficiently.
Full article
(This article belongs to the Special Issue Advances in Information Theory and Machine Learning for Computational Imaging)
Open Access Article
Sensor Fusion for Target Detection Using LLM-Based Transfer Learning Approach
by
Yuval Ziv, Barouch Matzliach and Irad Ben-Gal
Entropy 2025, 27(9), 928; https://doi.org/10.3390/e27090928 - 3 Sep 2025
Abstract
This paper introduces a novel sensor fusion approach for the detection of multiple static and mobile targets by autonomous mobile agents. Unlike previous studies that rely on theoretical sensor models assumed to be independent, the proposed methodology leverages real-world sensor data, which is transformed into sensor-specific probability maps (using object-detection estimates for optical data and a dedicated deep learning model applied to averaged point-cloud intensities for LIDAR data) before being integrated through a large language model (LLM) framework. We introduce a methodology based on LLM transfer learning (LLM-TLFT) to create a robust global probability map enabling efficient swarm management and target detection in challenging environments. The paper focuses on real data obtained from two types of sensors, light detection and ranging (LIDAR) sensors and optical sensors, and demonstrates significant improvement over existing methods (Independent Opinion Pool, CNN, GPT-2 with deep transfer learning) in terms of precision, recall, and computational efficiency, particularly in scenarios with high noise and sensor imperfections. A significant advantage of the proposed approach is its ability to interpret dependencies between different sensors. In addition, model compression using knowledge distillation was performed (distilled TLFT), which yielded satisfactory results for deploying the proposed approach on edge devices.
Full article
(This article belongs to the Special Issue Informational Coordinative and Teleological Control of Distributed and Multi Agent Systems)
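For reference, the Independent Opinion Pool baseline named in the abstract can be sketched in a few lines of NumPy. The probability values below are hypothetical; the paper's LLM-TLFT method differs precisely in that it does not assume the sensors are conditionally independent.

```python
import numpy as np

def independent_opinion_pool(maps, prior=0.5):
    """Fuse per-sensor occupancy probability maps under the assumption
    of conditional independence between sensors (the classical baseline
    the LLM-TLFT approach is compared against)."""
    maps = np.asarray(maps, dtype=float)
    # Work in odds space: fused odds = prior odds * product of
    # per-sensor likelihood ratios (each corrected by the prior).
    odds = (prior / (1 - prior)) * np.prod(
        (maps / (1 - maps)) * ((1 - prior) / prior), axis=0
    )
    return odds / (1 + odds)

# Hypothetical per-cell target probabilities from the two sensor types:
lidar_map   = np.array([[0.9, 0.2], [0.5, 0.7]])
optical_map = np.array([[0.8, 0.3], [0.5, 0.6]])
fused = independent_opinion_pool([lidar_map, optical_map])
print(np.round(fused, 3))
```

Note how two agreeing confident sensors reinforce each other (cell [0, 0]) while an uninformative 0.5 reading leaves the fused estimate unchanged (cell [1, 0]); modeling when that reinforcement is unwarranted because sensors share error modes is the dependency the LLM framework aims to capture.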
Open Access Article
Noise-Robust-Based Clock Parameter Estimation and Low-Overhead Time Synchronization in Time-Sensitive Industrial Internet of Things
by
Long Tang, Fangyan Li, Zichao Yu and Haiyong Zeng
Entropy 2025, 27(9), 927; https://doi.org/10.3390/e27090927 - 3 Sep 2025
Abstract
Time synchronization is critical for task-oriented and time-sensitive Industrial Internet of Things (IIoT) systems. Nevertheless, achieving high-precision synchronization with low communication overhead remains a key challenge due to the constrained resources of IIoT devices. In this paper, we propose a single-timestamp time synchronization scheme that significantly reduces communication overhead by utilizing the access point (AP) mechanism of periodically collecting sensor device data. The reduced communication overhead alleviates network congestion, which is essential for achieving low end-to-end latency in synchronized IIoT networks. Furthermore, to mitigate the impact of random delay noise on clock parameter estimation, we propose a noise-robust Maximum Likelihood Estimation (NR-MLE) algorithm that jointly optimizes synchronization accuracy and resilience to random delays. Specifically, we decompose the collected timestamp matrix into two low-rank matrices and use gradient descent to minimize the reconstruction error plus a regularization term, approximating the true signal and removing noise. The denoised timestamp matrix is then used to jointly estimate clock skew and offset via MLE, and the corresponding Cramér–Rao Lower Bounds (CRLBs) are derived. The simulation results demonstrate that the NR-MLE algorithm achieves higher clock parameter estimation accuracy than conventional MLE and exhibits strong robustness against increasing noise levels.
Full article
(This article belongs to the Special Issue Task-Oriented Communications in Industrial IoT: Age of Information and Beyond)
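The two-step recipe in the abstract (low-rank denoising of the timestamp matrix by gradient descent, then MLE of clock skew and offset) can be sketched on synthetic data. This is a hedged stand-in, not the paper's model: the clock model, noise level, rank, and regularization weight below are all illustrative assumptions, and under Gaussian delay noise the MLE of a linear clock model reduces to ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic timestamps: device clock reads skew * t_ref + offset + noise,
# collected over 50 rounds (rows) for 4 devices (columns).
true_skew, true_offset = 1.0002, 5e-3
t_ref = np.linspace(0.0, 10.0, 50)
T = true_skew * np.outer(t_ref, np.ones(4)) + true_offset
T_noisy = T + 1e-3 * rng.standard_normal(T.shape)  # random delay noise

# Step 1: low-rank denoising T ~ U @ V by gradient descent on
# ||T - U V||^2 + mu (||U||^2 + ||V||^2)  (a stand-in for the paper's scheme).
r, mu, lr = 2, 1e-3, 1e-3
U = 0.1 * rng.standard_normal((T.shape[0], r))
V = 0.1 * rng.standard_normal((r, T.shape[1]))
for _ in range(5000):
    E = U @ V - T_noisy
    U -= lr * (E @ V.T + mu * U)
    V -= lr * (U.T @ E + mu * V)
T_denoised = U @ V

# Step 2: under Gaussian delays, the MLE of (skew, offset) for a linear
# clock model is ordinary least squares against the reference time.
skew, offset = np.polyfit(t_ref, T_denoised[:, 0], 1)
print(skew, offset)
```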
Open Access Article
Benchmarking Static Analysis for PHP Applications Security
by
Jiazhen Zhao, Kailong Zhu, Canju Lu, Jun Zhao and Yuliang Lu
Entropy 2025, 27(9), 926; https://doi.org/10.3390/e27090926 - 3 Sep 2025
Abstract
PHP is the most widely used server-side programming language, but it remains highly susceptible to diverse classes of vulnerabilities. Static Application Security Testing (SAST) tools are commonly adopted for vulnerability detection; however, their evaluation lacks systematic criteria capable of quantifying information loss and uncertainty in the analysis. Existing approaches, often based on small sets of real-world cases or on heuristic sampling, fail to control experimental entropy within test cases. This uncontrolled variability makes it difficult to measure the information gain provided by different tools and to accurately differentiate their performance under varying levels of structural and semantic complexity. In this paper, we develop a systematic evaluation framework for PHP SAST tools, designed to provide accurate and comprehensive assessments of their vulnerability detection capabilities. The framework explicitly isolates the key factors influencing data flow analysis, enabling evaluation over four progressive dimensions with controlled information diversity. Using a benchmark instance, we validate the framework's feasibility and show how it reduces evaluation entropy, enabling more reliable measurement of detection capabilities. Our results highlight the framework's ability to reveal limitations in current SAST tools, offering actionable insights for their future improvement.
Full article
Open Access Article
Structural Complexity as a Directional Signature of System Evolution: Beyond Entropy
by
Donglu Shi
Entropy 2025, 27(9), 925; https://doi.org/10.3390/e27090925 - 3 Sep 2025
Abstract
We propose a universal framework for understanding system evolution based on structural complexity, offering a directional signature that applies across physical, chemical, and biological domains. Unlike entropy, which is constrained by its definition in closed, equilibrium systems, we introduce Kolmogorov Complexity (KC) and Fractal Dimension (FD) as quantifiable, scalable metrics that capture the emergence of organized complexity in open, non-equilibrium systems. We examine two major classes of systems: (1) living systems, revisiting Schrödinger's insight that biological growth may locally reduce entropy while increasing structural order, and (2) irreversible natural processes such as oxidation, diffusion, and material aging. We formalize a universal law expressed as a non-decreasing function Ω(t) = α·KC(t) + β·FD(t), which parallels the Second Law of Thermodynamics but tracks the rise in algorithmic and geometric complexity. This framework integrates principles from complexity science, providing a robust, mathematically grounded lens for describing the directional evolution of systems across scales, from crystals to cognition.
Full article
(This article belongs to the Section Complexity)
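A quantity of the form Ω(t) = α·KC(t) + β·FD(t) can be approximated numerically. Since true Kolmogorov Complexity is uncomputable, compressed length is a standard computable proxy, and the box-counting dimension serves as FD. The weights, input pattern, and proxies below are illustrative assumptions, not the paper's implementation.

```python
import zlib
import numpy as np

def kc_estimate(bits):
    """Approximate Kolmogorov Complexity by compressed length in bytes
    (a standard computable proxy; true KC is uncomputable)."""
    return len(zlib.compress(bits.tobytes(), 9))

def box_counting_fd(img):
    """Estimate the box-counting (fractal) dimension of a binary 2-D array."""
    n = img.shape[0]
    sizes = [s for s in (1, 2, 4, 8, 16) if s < n]
    counts = []
    for s in sizes:
        # count boxes of side s containing at least one filled cell
        c = (img[: n - n % s, : n - n % s]
             .reshape(n // s, s, -1, s).any(axis=(1, 3)).sum())
        counts.append(c)
    # slope of log N(s) vs. log(1/s) is the box-counting dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Hypothetical snapshot of a system state as a binary pattern:
rng = np.random.default_rng(0)
state = (rng.random((64, 64)) < 0.5).astype(np.uint8)
alpha, beta = 1.0, 1.0  # the weights are free parameters of Omega(t)
omega = alpha * kc_estimate(state) + beta * box_counting_fd(state)
print(omega)
```

Tracking `omega` over successive snapshots of a system's state is one way the proposed non-decreasing signature could be evaluated in practice.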
Open Access Article
Learnable Convolutional Attention Network for Unsupervised Knowledge Graph Entity Alignment
by
Weishan Cai and Wenjun Ma
Entropy 2025, 27(9), 924; https://doi.org/10.3390/e27090924 - 3 Sep 2025
Abstract
The success of current entity alignment (EA) tasks largely depends on the supervision provided by labeled data. Given the cost of labeling, most supervised methods are difficult to apply in practical scenarios. Therefore, an increasing number of works based on contrastive learning, active learning, or other deep learning techniques have been developed to address the performance bottleneck caused by the lack of labeled data. However, existing unsupervised EA methods still face certain limitations: either their modeling complexity is high or they fail to balance the effectiveness and practicality of alignment. To overcome these issues, we propose a learnable convolutional attention network for unsupervised entity alignment, named LCA-UEA. Specifically, LCA-UEA performs convolution operations before the attention mechanism, ensuring the acquisition of structural information while avoiding the superposition of redundant information. Then, to efficiently filter out invalid neighborhood information of aligned entities, LCA-UEA designs a relation structure reconstruction method based on potential matching relations, thereby enhancing the usability and scalability of the EA method. Notably, a consistency-based similarity function is proposed to better measure the similarity of candidate entity pairs. Finally, we conducted extensive experiments on three datasets of different sizes and types (cross-lingual and monolingual) to verify the superiority of LCA-UEA. Experimental results demonstrate that LCA-UEA significantly improves alignment accuracy, outperforming 25 supervised or unsupervised methods and improving Hits@1 by 6.4% over the best baseline in the best case.
Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications, 2nd Edition)
Open Access Article
Simulating Public Opinion: Comparing Distributional and Individual-Level Predictions from LLMs and Random Forests
by
Fernando Miranda and Pedro Paulo Balbi
Entropy 2025, 27(9), 923; https://doi.org/10.3390/e27090923 - 2 Sep 2025
Abstract
Understanding and modeling the flow of information in human societies is essential for capturing phenomena such as polarization, opinion formation, and misinformation diffusion. Traditional agent-based models often rely on simplified behavioral rules that fail to capture the nuanced and context-sensitive nature of human decision-making. In this study, we explore the potential of Large Language Models (LLMs) as data-driven, high-fidelity agents capable of simulating individual opinions under varying informational conditions. Conditioning LLMs on real survey data from the 2020 American National Election Studies (ANES), we investigate their ability to predict individual-level responses across a spectrum of political and social issues in a zero-shot setting, without any training on the survey outcomes. Using Jensen–Shannon distance to quantify divergence in opinion distributions and F1-score to measure predictive accuracy, we compare LLM-generated simulations to those produced by a supervised Random Forest model. While performance at the individual level is comparable, LLMs consistently produce aggregate opinion distributions closer to the empirical ground truth. These findings suggest that LLMs offer a promising new method for simulating complex opinion dynamics and modeling the probabilistic structure of belief systems in computational social science.
Full article
(This article belongs to the Section Multidisciplinary Applications)
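The Jensen–Shannon distance used in the abstract to compare aggregate opinion distributions can be computed directly. The survey shares below are made-up illustrative numbers, not ANES data or the paper's results.

```python
import numpy as np

def jensen_shannon_distance(p, q, base=2.0):
    """JS distance (square root of the JS divergence) between two discrete
    distributions; 0 means identical, 1 means disjoint support (base 2)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0  # terms with a_i = 0 contribute nothing
        return np.sum(a[mask] * np.log(a[mask] / b[mask])) / np.log(base)
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Hypothetical 5-point survey item (strongly disagree .. strongly agree):
ground_truth = [0.10, 0.20, 0.30, 0.25, 0.15]   # empirical response shares
llm_sim      = [0.12, 0.18, 0.28, 0.27, 0.15]   # simulated LLM shares
rf_sim       = [0.05, 0.35, 0.20, 0.30, 0.10]   # simulated Random Forest shares

d_llm = jensen_shannon_distance(ground_truth, llm_sim)
d_rf = jensen_shannon_distance(ground_truth, rf_sim)
print(d_llm, d_rf)  # smaller = closer to the empirical distribution
```

A smaller JS distance for the LLM-generated distribution is exactly the kind of aggregate-level result the study reports, complementing the individual-level F1 comparison.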
News
3 September 2025
Join Us at the MDPI at the University of Toronto Career Fair, 23 September 2025, Toronto, ON, Canada

1 September 2025
MDPI INSIGHTS: The CEO’s Letter #26 – CUJS, Head of Ethics, Open Peer Review, AIS 2025, Reviewer Recognition
Topics
Topic in
Entropy, Algorithms, Computation, Fractal Fract
Computational Complex Networks
Topic Editors: Alexandre G. Evsukoff, Yilun Shang
Deadline: 30 September 2025
Topic in
AI, Energies, Entropy, Sustainability
Game Theory and Artificial Intelligence Methods in Sustainable and Renewable Energy Power Systems
Topic Editors: Lefeng Cheng, Pei Zhang, Anbo Meng
Deadline: 31 October 2025
Topic in
Entropy, Environments, Land, Remote Sensing
Bioterraformation: Emergent Function from Systemic Eco-Engineering
Topic Editors: Matteo Convertino, Jie Li
Deadline: 30 November 2025
Topic in
Applied Sciences, Electronics, Entropy, Mathematics, Symmetry, Technologies, Chips
Quantum Information and Quantum Computing, 2nd Volume
Topic Editors: Durdu Guney, David Petrosyan
Deadline: 6 January 2026

Conferences
Special Issues
Special Issue in
Entropy
Coding for Aeronautical Telemetry
Guest Editor: Michael Rice
Deadline: 10 September 2025
Special Issue in
Entropy
Black Hole Information Problem: Challenges and Perspectives
Guest Editors: Qingyu Cai, Baocheng Zhang, Christian Corda
Deadline: 10 September 2025
Special Issue in
Entropy
Quantum Computing with Trapped Ions
Guest Editors: Susan M. Clark, Phil Richerme
Deadline: 15 September 2025
Special Issue in
Entropy
Information Hiding and Secret Sharing for New Carriers and Their Security Evaluation Methods
Guest Editors: Xuehu Yan, Peng Li
Deadline: 15 September 2025
Topical Collections
Topical Collection in
Entropy
Algorithmic Information Dynamics: A Computational Approach to Causality from Cells to Networks
Collection Editors: Hector Zenil, Felipe Abrahão
Topical Collection in
Entropy
Foundations of Statistical Mechanics
Collection Editor: Antonio M. Scarfone
Topical Collection in
Entropy
Feature Papers in Information Theory
Collection Editors: Raúl Alcaraz, Luca Faes, Leandro Pardo, Boris Ryabko