Entropy, Volume 28, Issue 2 (February 2026) – 120 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
36 pages, 414 KB  
Tutorial
The Hitchhiker’s Guide to the Surface Code
by Fang Zhang and Jianxin Chen
Entropy 2026, 28(2), 251; https://doi.org/10.3390/e28020251 (registering DOI) - 22 Feb 2026
Abstract
Error correction is an essential part of the theory of quantum computation. However, students new to quantum computation may find the theories of error correction and fault tolerance daunting, or they may be stuck with theoretical/outdated schemes (such as the one in the original proof of the threshold theorem by Aharonov and Ben-Or) with unrealistically low thresholds and/or high overhead. In this article, we describe an adequately modern approach to fault-tolerant quantum computation based on the surface code and lattice surgery. The reader is assumed to have a basic understanding of quantum computation (state vectors, unitary gates, measurements, etc.), but no prior knowledge about quantum codes or quantum error correction is needed. Full article
(This article belongs to the Special Issue Quantum Error Correction and Fault-Tolerance)
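As a warm-up to the kind of syndrome extraction the surface code generalizes, the sketch below decodes single bit flips on a three-qubit repetition code using classical parity checks; it is a toy illustration only, not the tutorial's surface-code or lattice-surgery construction.

```python
import numpy as np

# Parity-check matrix of the 3-qubit bit-flip repetition code:
# each row is a Z-type stabilizer check (Z1Z2 and Z2Z3), written over GF(2).
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=int)

# Map each possible syndrome to the most likely single-qubit flip.
SYNDROME_TO_ERROR = {
    (0, 0): np.array([0, 0, 0]),  # no error
    (1, 0): np.array([1, 0, 0]),  # flip on qubit 0
    (1, 1): np.array([0, 1, 0]),  # flip on qubit 1
    (0, 1): np.array([0, 0, 1]),  # flip on qubit 2
}

def decode(error):
    """Measure the syndrome of a bit-flip pattern and apply the matching correction."""
    syndrome = tuple(int(s) for s in (H @ error) % 2)
    correction = SYNDROME_TO_ERROR[syndrome]
    return (error + correction) % 2  # residual error after correction

# Every single-qubit flip is corrected back to the all-zero (no-error) pattern.
for qubit in range(3):
    e = np.zeros(3, dtype=int)
    e[qubit] = 1
    assert decode(e).sum() == 0
print("all single-qubit flips corrected")
```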
16 pages, 312 KB  
Article
On Gray Images of Cyclic and Self-Orthogonal Codes Over F_q + uF_q + vF_q
by Sami H. Saif and Alhanouf Ali Alhomaidhi
Entropy 2026, 28(2), 250; https://doi.org/10.3390/e28020250 (registering DOI) - 22 Feb 2026
Abstract
Let p be a prime with p ∉ {2, 5} and let q = p^m. This paper studies cyclic and self-orthogonal linear codes of length n over the finite local non-Frobenius ring R_{p,u,v} = F_q + uF_q + vF_q, with u^2 = v^2 = uv = vu = 0. We define an F_q-linear Gray map π_n : R_{p,u,v}^n → F_q^{6n} and investigate the structural properties of Gray images of cyclic codes under this map. It is shown that π_n preserves self-orthogonality and, when gcd(n, p) = 1, transforms any cyclic code over R_{p,u,v} into a quasi-cyclic code over F_q of length 6n with index dividing 6. Moreover, we completely characterize the possible quasi-cyclic indices of the Gray images, proving that only the values l ∈ {1, 3, 6} can occur, and we establish necessary and sufficient conditions for each case in terms of the generators of the associated cyclic code. Several explicit examples are provided to illustrate the theoretical results and the resulting quasi-cyclic structures. Full article
(This article belongs to the Section Multidisciplinary Applications)
29 pages, 1965 KB  
Article
Unified Space–Time-Message Interference Alignment: An End-to-End Learning Approach
by Elaheh Sadeghabadi and Steven Blostein
Entropy 2026, 28(2), 249; https://doi.org/10.3390/e28020249 (registering DOI) - 21 Feb 2026
Abstract
This paper investigates the performance of a multi-user multiple-input single-output (MU-MISO) broadcast channel under the practical constraints of imperfect, delayed, and quantized channel state information at the transmitter (CSIT). Conventional interference alignment (IA) strategies—classified into spatial (SIA), temporal (TIA), and message-domain (MIA) techniques—are typically designed for specific, idealized CSI regimes and often rely on successive interference cancellation (SIC) at the receiver. However, the iterative structure of SIC is highly susceptible to error propagation, particularly under CSI uncertainty and high-order modulation. We propose Deep-STMIA, a novel end-to-end deep learning framework that jointly optimizes interference management across the space, time, and message domains. Using a neural network-based autoencoder architecture with structural message-domain regularization, Deep-STMIA learns to mitigate the catastrophic effects of error propagation and adapts to a continuum of CSIT conditions. Simulation results demonstrate that Deep-STMIA matches the performance of degrees-of-freedom (DoF) optimal benchmarks in extreme CSI regimes and significantly outperforms state-of-the-art baselines, such as rate-splitting multiple access (RSMA), in practical imperfect CSIT scenarios. Full article
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)
25 pages, 1618 KB  
Article
Energy-Efficient 3D Trajectory Optimization and Resource Allocation for UAV-Enabled ISAC Systems
by Lulu Jing, Hai Wang, Zhen Qin, Yicheng Zhao, Yi Zhu and Wensheng Zhao
Entropy 2026, 28(2), 248; https://doi.org/10.3390/e28020248 (registering DOI) - 21 Feb 2026
Abstract
Owing to their high flexibility, autonomous operation, and rapid deployment capability, unmanned aerial vehicles (UAVs) serve as effective aerial platforms for sensing and communication in remote and time-critical scenarios. However, their limited onboard energy budget poses a significant bottleneck for sustained operations. This paper investigates an energy-efficient UAV-assisted integrated sensing and communication (ISAC) system, aiming to maximize the sensing energy efficiency (SEE), defined as the ratio of the total radar estimation rate to the total energy consumption. Unlike prior works focused solely on rate maximization or fairness, our design jointly optimizes the UAV’s 3D trajectory, task scheduling, and power allocation under kinematic and coverage constraints to maximize the SEE. To solve the formulated non-convex fractional programming problem, we propose an efficient iterative algorithm based on the Dinkelbach method and block coordinate descent (BCD). Simulation results demonstrate that the proposed scheme achieves a superior trade-off between sensing performance and energy consumption. Full article
(This article belongs to the Special Issue Integrated Sensing and Communication (ISAC) in 6G)
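The sensing energy efficiency above is a ratio of two quantities, which the authors optimize with the Dinkelbach method; the sketch below illustrates that iteration on a toy one-dimensional fractional objective (the numerator, denominator, and grid-search subproblem are stand-ins, not the paper's UAV trajectory/power model).

```python
import numpy as np

# Toy fractional objective: maximize N(x)/D(x) over a discrete grid of x.
# (Stand-ins for the paper's estimation-rate and energy-consumption terms.)
x_grid = np.linspace(0.1, 5.0, 2000)
N = lambda x: np.log2(1.0 + 4.0 * x)     # "rate-like" numerator
D = lambda x: 1.0 + 0.5 * x ** 2         # "energy-like" denominator

def dinkelbach(tol=1e-9, max_iter=100):
    lam = 0.0                            # current estimate of the optimal ratio
    for _ in range(max_iter):
        # Subproblem: maximize N(x) - lam * D(x) (here by grid search; in the
        # paper this step would be a BCD solve over trajectory/power/schedule).
        vals = N(x_grid) - lam * D(x_grid)
        x_star = x_grid[np.argmax(vals)]
        if vals.max() < tol:             # Dinkelbach optimality condition
            return x_star, lam
        lam = N(x_star) / D(x_star)      # update the ratio estimate
    return x_star, lam

x_opt, ratio = dinkelbach()
print(f"optimal x = {x_opt:.3f}, efficiency = {ratio:.4f}")
```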
20 pages, 310 KB  
Article
A Comparison of Algorithms to Achieve the Maximum Entropy in the Theory of Evidence
by Joaquín Abellán, Aina López-Gay, Maria Isabel A. Benítez and Francisco Javier G. Castellano
Entropy 2026, 28(2), 247; https://doi.org/10.3390/e28020247 (registering DOI) - 21 Feb 2026
Abstract
Within the framework of evidence theory, maximum entropy is regarded as a measure of total uncertainty that satisfies a comprehensive set of mathematical properties and behavioral requirements. However, its practical applicability is severely questioned due to the high computational complexity of its calculation, which involves the manipulation of the power set of the frame of discernment. In the literature, attempts have been made to reduce this complexity by restricting the computation to singleton elements, leading to a formulation based on reachable probability intervals. Although this approach relies on a less specific representation of evidential information, it has been shown to provide an equivalent maximum entropy value under certain conditions. In this paper, we present an experimental comparative study of two algorithms for calculating maximum entropy in evidence theory: the classical algorithm, which operates directly on belief functions, and an alternative algorithm based on reachable probability intervals. Through numerical experiments, we demonstrate that the differences between these approaches are less pronounced than previously suggested in the literature. Depending on the type of information representations to which it is applied, the original algorithm based on belief functions can be more efficient than the one using the reachable probability interval approach. This is an interesting result, and a reason for choosing one algorithm over the other depending on the situation. Full article
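For the interval-based route discussed above, the maximum-entropy distribution compatible with reachable probability intervals [l_i, u_i] has a simple water-filling form (each probability is a common level clipped into its interval), and the sketch below computes it by bisection; this illustrates only the interval formulation, not the belief-function algorithm compared in the article.

```python
import numpy as np

def max_entropy_from_intervals(lower, upper, iters=100):
    """Maximum-entropy distribution p with lower <= p <= upper and sum(p) = 1.

    The optimum has the form p_i = clip(t, lower_i, upper_i) for a common
    level t (from the KKT conditions); t is found by bisection on the total
    mass.  Assumes consistent intervals: sum(lower) <= 1 <= sum(upper).
    """
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    lo, hi = lower.min(), upper.max()
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        if np.clip(t, lower, upper).sum() < 1.0:
            lo = t
        else:
            hi = t
    return np.clip(0.5 * (lo + hi), lower, upper)

# Example: reachable intervals on a 4-element frame of discernment.
lower = [0.10, 0.05, 0.30, 0.00]
upper = [0.40, 0.30, 0.60, 0.20]
p = max_entropy_from_intervals(lower, upper)
H = -(p[p > 0] * np.log2(p[p > 0])).sum()
print(p.round(4), f"entropy = {H:.4f} bits")
```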
23 pages, 1084 KB  
Review
Molecular Dissipative Structuring: The Fundamental Creative Force in Biology
by Karo Michaelian
Entropy 2026, 28(2), 246; https://doi.org/10.3390/e28020246 - 20 Feb 2026
Abstract
The spontaneous emergence of macroscopic dissipative structures in systems driven by generalized chemical potentials is well established in non-equilibrium thermodynamics. Examples include atmospheric/oceanic currents, hurricanes and tornadoes, Rayleigh–Bénard convection cells and reaction–diffusion patterns. Less well recognized, however, are microscopic dissipative structures that form when the driving potential excites internal molecular degrees of freedom (electronic states and nuclear coordinates), typically via high-energy photons or coupling with ATP. Examples include dynamic nanoscale lipid rafts, kinesin or dynein motors along microtubules, and spatiotemporal Ca2+ signaling waves propagating through the cytoplasm. The thermodynamic dissipation theory of the origin of life asserts that the core biomolecules of all three domains of life originated as self-organized molecular dissipative structures—chromophores or pigments—that proliferated on the Archean ocean surface to absorb and dissipate the intense “soft” UV-C (205–280 nm) and UV-B (280–315 nm) solar flux into heat. Thermodynamic coupling to ancillary antenna and surface-anchoring molecules subsequently increased photon dissipation and enabled more complex dissipative processes, including photosynthesis, to dissipate lower-energy but higher-intensity UV-A and visible light. Further thermodynamic coupling to abiotic geophysical cycles (e.g., the water cycle, winds, and ocean currents) ultimately led to today’s biosphere, efficiently dissipating the incident solar spectrum well into the infrared. This paper reviews historical considerations of UV light in life’s origin and our proposal of UV-C molecular dissipative structuring of three classes of fundamental biomolecules: nucleobases, fatty acids, and pigments. Increases in structural complexity and assembly into larger complexes are shown to be driven by the thermodynamic imperative of enhancing solar photon dissipation. We conclude that thermodynamic selection of dissipative structures, rather than Darwinian natural selection, is the fundamental creative force in biology at all levels of hierarchy. Full article
(This article belongs to the Special Issue Alive or Not Alive: Entropy and Living Things)
22 pages, 617 KB  
Article
Joint Sensing and Secure Communications in RIS-Based Symbiotic Radio Systems
by Junhong Yang and Ke-Wen Huang
Entropy 2026, 28(2), 245; https://doi.org/10.3390/e28020245 - 20 Feb 2026
Abstract
We study the problem of joint sensing and secure communications in a reconfigurable intelligent surface (RIS)-based symbiotic radio (SR) system. In the considered system, a dual-functional radar and communication base station (DFRC-BS) achieves secure communications with multiple user terminals (UTs) and, at the same time, performs a target sensing task. An RIS simultaneously assists the secure communications between the DFRC-BS and the multiple UTs and conveys its own data to the UTs by modulating the radio frequency signal from the DFRC-BS. Two different SR settings are investigated, namely, parasitic SR (PSR) and commensal SR (CSR). In both the PSR and the CSR settings, the echo signal from the sensing target is interfered with by the backscattered signal from the RIS. We propose two strategies for the DFRC-BS to handle the interference from the RIS, namely, (1) sensing directly without interference cancelation, and (2) performing interference cancelation before sensing. For both strategies, we aim to maximize the sum secrecy rate from the DFRC-BS to the multiple UTs while ensuring satisfactory performance for the sensing and backscatter links. A block coordinate ascent algorithm is proposed to solve the established non-convex optimization problems. Simulation results reveal that performing interference cancelation at the DFRC-BS leads to improved system performance. Furthermore, compared with PSR, CSR leads to a higher sum secrecy rate between the DFRC-BS and the UTs. Full article
(This article belongs to the Special Issue Wireless Physical Layer Security Toward 6G)
28 pages, 401 KB  
Article
Crofton Risk and Relative Transactional Entropy
by Marcin Makowski and Edward W. Piotrowski
Entropy 2026, 28(2), 244; https://doi.org/10.3390/e28020244 - 20 Feb 2026
Abstract
We develop a geometric approach to financial risk based on Crofton’s idea and the tools of the Radon transform. The trajectory of a financial instrument is defined with respect to a frame of reference (money, benchmark). A central role is played by simple instruments, inspired by the annual percentage rate (APR) concept, whose graphs in a fixed reference frame are line segments. Risk is interpreted transactionally as the density of exchange dilemmas that arise when the instrument’s trajectory intersects the trajectories of simple instruments. This perspective leads to a risk measure given by the trajectory length in the Crofton–Steinhaus sense. We also introduce new notions, such as geometric volatility, transactional entropy, and trajectory temperature, associated with the distribution of the number of intersections, enabling thermodynamic analogies to be incorporated into the description of risk and market complexity. Full article
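The risk measure above is a trajectory length in the Crofton sense, i.e., it is read off from how often random lines cross the trajectory. The sketch below gives a plain Monte Carlo Cauchy–Crofton length estimate for a polyline as a generic illustration; it is not the paper's transactional construction, and the sampling box is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def crofton_length(points, n_lines=20_000, p_max=2.0):
    """Monte Carlo Cauchy-Crofton estimate of a polyline's length.

    Lines are parameterized as x*cos(theta) + y*sin(theta) = p, with
    theta ~ U[0, pi) and p ~ U[-p_max, p_max].  The Cauchy-Crofton formula
    gives length = (1/2) * E[#crossings] * measure of the sampled line set,
    provided the whole polyline lies inside the strip |p| < p_max.
    """
    theta = rng.uniform(0.0, np.pi, n_lines)
    p = rng.uniform(-p_max, p_max, n_lines)
    # Signed position of every vertex relative to every sampled line.
    s = (np.outer(np.cos(theta), points[:, 0])
         + np.outer(np.sin(theta), points[:, 1]) - p[:, None])
    # A crossing happens where consecutive vertices lie on opposite sides.
    crossings = (np.sign(s[:, :-1]) != np.sign(s[:, 1:])).sum(axis=1)
    measure = np.pi * (2.0 * p_max)          # area of the sampled (theta, p) box
    return 0.5 * crossings.mean() * measure

# Sanity check: a polygonal unit circle, whose length should be close to 2*pi.
t = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.column_stack([np.cos(t), np.sin(t)])
print(f"Crofton estimate: {crofton_length(circle):.3f}   exact: {2 * np.pi:.3f}")
```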
21 pages, 3655 KB  
Article
Coupled Dynamics of Vaccination Behavior and Epidemic Spreading on Multilayer Higher-Order Networks
by Zhishuang Wang, Guoqiang Zeng, Qian Yin, Linyuan Guo and Zhiyong Hong
Entropy 2026, 28(2), 243; https://doi.org/10.3390/e28020243 - 20 Feb 2026
Abstract
Vaccination behavior and epidemic spreading are strongly intertwined processes, and their coevolution is often shaped by both individual decision-making and social interactions. However, most existing studies model such interactions at the pairwise level, overlooking the potential impact of higher-order social influence arising from group interactions. In this work, we develop a coupled vaccination–epidemic spreading model on multilayer higher-order networks, where vaccination behavior evolves on a simplicial complex and epidemic propagation occurs on a physical contact network. The model incorporates imperfect vaccine efficacy, allowing vaccinated individuals to become infected, and introduces a hybrid vaccination strategy that combines rational cost–benefit evaluation with social influence from both pairwise and higher-order interactions, as well as negative effects induced by vaccine failure. By constructing the coupled dynamical equations, we analytically derive the epidemic outbreak threshold and elucidate how higher-order interactions, behavioral responses, and vaccine-related parameters jointly affect epidemic dynamics. Numerical simulations on networks with different structural properties validate the theoretical results and reveal pronounced structure-dependent effects. The results show that higher-order social interactions can significantly reshape vaccination behavior and epidemic prevalence, while network heterogeneity and vaccine imperfection play crucial roles in determining the outbreak threshold and steady-state infection level. These results emphasize the necessity of incorporating higher-order interactions together with realistic vaccination behavior into epidemic modeling and offer new insights for the design of effective vaccination strategies. Full article
(This article belongs to the Special Issue Complexity of Social Networks)
24 pages, 4248 KB  
Article
Multi-Scale Feature Learning for Farmland Segmentation Under Complex Spatial Structures
by Yongqi Han, Yuqing Wang, Yun Zhang, Hongfu Ai, Chuan Qin and Xinle Zhang
Entropy 2026, 28(2), 242; https://doi.org/10.3390/e28020242 - 19 Feb 2026
Abstract
Fragmented, irregular, and scale-heterogeneous farmland parcels introduce high spatial complexity into high-resolution remote sensing imagery, leading to boundary ambiguity and inter-class spectral confusion that hinder effective feature discrimination in semantic segmentation. To address these challenges, we propose CSMNet, which adopts a ConvNeXt V2 encoder for hierarchical representation learning and a multi-scale fusion architecture with redesigned skip connections and lateral outputs to reduce semantic gaps and preserve cross-scale information. An adaptive multi-head attention module dynamically integrates channel-wise, spatial, and global contextual cues through a lightweight gating mechanism, enhancing boundary awareness in structurally complex regions. To further improve robustness, a hybrid loss combining Binary Cross-Entropy and Dice loss is employed to alleviate class imbalance and ensure reliable extraction of small and fragmented parcels. Experimental results from Nong’an County demonstrate that the proposed model achieves superior performance compared with several state-of-the-art segmentation methods, attaining a Precision of 95.91%, a Recall of 93.95%, an F1-score of 94.92%, and an IoU of 90.85%. The IoU exceeds that of Unet++ by 8.92% and surpasses PSPNet, SegNet, DeepLabv3+, TransUNet, SeaFormer and SegMAN by more than 15%, 10%, 7%, 6%, 5% and 2%, respectively. These results indicate that CSMNet effectively improves information utilization and boundary delineation in complex agricultural landscapes. Full article
(This article belongs to the Section Multidisciplinary Applications)
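The hybrid loss mentioned above combines pixel-wise Binary Cross-Entropy with a Dice term; a minimal NumPy sketch of such a combination follows (the equal weighting and smoothing constant are illustrative assumptions, not the values used in CSMNet).

```python
import numpy as np

def hybrid_bce_dice_loss(pred, target, bce_weight=0.5, eps=1e-7, smooth=1.0):
    """Hybrid segmentation loss: weighted sum of BCE and soft Dice.

    pred   -- predicted foreground probabilities in (0, 1), any shape
    target -- binary ground-truth mask of the same shape
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    # Pixel-wise binary cross-entropy.
    bce = -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    # Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), smoothed so empty masks are safe.
    intersection = np.sum(pred * target)
    dice = 1.0 - (2.0 * intersection + smooth) / (np.sum(pred) + np.sum(target) + smooth)
    return bce_weight * bce + (1.0 - bce_weight) * dice

# Tiny example on a 4x4 mask with a slightly noisy prediction.
target = np.array([[0, 0, 1, 1]] * 4, dtype=float)
pred = np.clip(target + 0.1 * np.random.default_rng(1).normal(size=target.shape), 0, 1)
print(f"loss = {hybrid_bce_dice_loss(pred, target):.4f}")
```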
21 pages, 1927 KB  
Article
A Dynamic Hybrid Weighting Framework for Teaching Effectiveness Evaluation in Multi-Criteria Decision-Making: Integrating Interval-Valued Intuitionistic Fuzzy AHP and Entropy Triggering
by Chengling Lu and Yanxue Zhang
Entropy 2026, 28(2), 241; https://doi.org/10.3390/e28020241 - 19 Feb 2026
Abstract
Multi-criteria decision-making (MCDM) problems in complex evaluation systems are often characterized by high uncertainty in expert judgments and dynamic variations in indicator importance. Traditional analytic hierarchy process (AHP) and entropy-based weighting methods typically suffer from two inherent limitations: the inability to explicitly quantify expert hesitation and the rigidity of static weight assignment under evolving data distributions. To address these challenges, this paper proposes a dynamic hybrid weighting framework that integrates an interval-valued intuitionistic fuzzy analytic hierarchy process (IVIF-AHP) with an entropy-triggered correction mechanism. First, interval-valued intuitionistic fuzzy numbers are employed to simultaneously model membership, non-membership, and hesitation degrees in pairwise comparisons, enabling a more comprehensive representation of expert uncertainty. Second, an entropy-triggered dynamic fusion strategy is developed by jointly incorporating information entropy and coefficient of variation, allowing adaptive adjustment between subjective expert weights and objective data-driven weights. This mechanism effectively enhances sensitivity to high-dispersion criteria while preserving expert knowledge in low-variability indicators. The proposed framework is formulated in a hierarchical fuzzy decision structure and implemented through a fuzzy comprehensive evaluation process. Its feasibility and robustness are validated through a concrete case study on teaching effectiveness evaluation for a university engineering course, leveraging multi-source data. Comparative analysis demonstrates that the proposed approach effectively mitigates the weight rigidity and evaluation inflation observed in conventional methods. Furthermore, it improves diagnostic resolution and decision stability across different evaluation periods. The results indicate that the proposed entropy-triggered IVIF-AHP framework provides a mathematically sound and practically applicable solution for dynamic MCDM problems under uncertainty, with strong potential for extension to other complex evaluation and decision-support systems. Full article
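To make the objective side of the hybrid weighting concrete, the sketch below derives entropy weights and coefficients of variation from a decision matrix and blends them with expert weights through a dispersion-triggered mixing coefficient; the fusion rule and threshold here are illustrative assumptions, not the paper's IVIF-AHP mechanism.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from the classical entropy weight method."""
    P = X / X.sum(axis=0, keepdims=True)               # column-normalize (benefit criteria)
    m = X.shape[0]
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(m)
    d = 1.0 - E                                         # degree of divergence per criterion
    return d / d.sum()

def fused_weights(X, w_subjective, cv_threshold=0.25):
    """Blend subjective and entropy weights, leaning on data where dispersion is high.

    The per-criterion mixing coefficient alpha grows with the coefficient of
    variation, so high-dispersion criteria are corrected toward the objective
    weights while stable criteria keep the expert (subjective) weights.
    """
    w_obj = entropy_weights(X)
    cv = X.std(axis=0) / (X.mean(axis=0) + 1e-12)
    alpha = np.clip(cv / cv_threshold, 0.0, 1.0)        # 0 keeps expert, 1 is data-driven
    w = (1.0 - alpha) * np.asarray(w_subjective) + alpha * w_obj
    return w / w.sum()

# Example: 5 alternatives scored on 4 criteria, with expert weights from an AHP step.
X = np.array([[0.8, 0.6, 0.9, 0.5],
              [0.7, 0.9, 0.4, 0.5],
              [0.9, 0.5, 0.8, 0.6],
              [0.6, 0.7, 0.7, 0.5],
              [0.8, 0.8, 0.2, 0.6]])
w_expert = [0.4, 0.3, 0.2, 0.1]
print(fused_weights(X, w_expert).round(3))
```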
26 pages, 11745 KB  
Article
Robust Incipient Fault Diagnosis of Rolling Element Bearings Under Small-Sample Conditions Using Refined Multiscale Rating Entropy
by Shiqian Wu, Huiyu Liu and Liangliang Tao
Entropy 2026, 28(2), 240; https://doi.org/10.3390/e28020240 - 19 Feb 2026
Abstract
The operational reliability of aero-engines is critically dependent on the health of rolling element bearings, while incipient fault diagnosis remains particularly challenging under small-sample conditions. Although multiscale entropy methods are widely used for complexity analysis, conventional coarse-graining strategies suffer from severe information loss and unstable estimation when data are extremely limited. To address this, the primary objective of this study is to develop a robust diagnostic framework that ensures feature consistency and classification stability even with minimal training samples. Specifically, this paper proposes an integrated approach combining Refined Time-shifted Multiscale Rating Entropy (RTSMRaE) with an Animated Oat Optimization (AOO)-optimized Extreme Learning Machine (ELM). By introducing a refined time-shift operator and a dual-weight fusion mechanism, RTSMRaE effectively preserves transient impulsive features across multiple scales while suppressing stochastic fluctuations. Meanwhile, the AOO algorithm is employed to optimize the input weights and hidden biases of the ELM, alleviating performance instability caused by random initialization and improving generalization capability. Experimental validation on both laboratory-scale and real-world aviation bearing datasets demonstrates that the proposed RTSMRaE-AOO-ELM framework achieves a diagnostic accuracy of 99.47% with a standard deviation of ±0.48% using only five training samples per class. These results indicate that the proposed method offers superior diagnostic robustness and computational efficiency, providing a promising solution for intelligent condition monitoring in data-scarce industrial environments. Full article
(This article belongs to the Section Multidisciplinary Applications)
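Multiscale entropies start from a coarse-graining of the signal; the sketch below contrasts conventional coarse-graining with a simple time-shifted variant that keeps every phase-shifted sub-series at each scale. This shows only generic preprocessing; the rating-entropy and dual-weight fusion steps of RTSMRaE are specific to the paper and are not reproduced.

```python
import numpy as np

def coarse_grain(x, scale):
    """Conventional coarse-graining: mean of non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def time_shifted_series(x, scale):
    """Time-shifted multiscale variant: one sub-series per starting phase k.

    Keeping all `scale` phase-shifted sub-series (x[k], x[k+scale], ...) instead
    of a single averaged series limits information loss on short records; an
    entropy measure would then be computed on each sub-series and aggregated.
    """
    return [x[k::scale] for k in range(scale)]

x = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.default_rng(2).normal(size=1000)
print(len(coarse_grain(x, 5)))                       # 200 samples at scale 5
print([len(s) for s in time_shifted_series(x, 5)])   # five sub-series of 200 samples each
```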
14 pages, 367 KB  
Article
Dissipative Realization of a Quantum Distance-Based Classifier Using Open Quantum Walks
by Pedro Linck Maciel, Graeme Pleasance, Francesco Petruccione and Nadja K. Bernardes
Entropy 2026, 28(2), 239; https://doi.org/10.3390/e28020239 - 19 Feb 2026
Abstract
Open quantum walks (OQWs) constitute a class of quantum walks whose dynamics are entirely driven by interactions with the environment. It is well known that OQWs provide a general framework for implementing dissipative quantum computation. In this work, we demonstrate the feasibility of running the previously proposed quantum distance-based classifier within the open quantum walk computation model, and we show that its expected runtime remains finite even in the slower regime. Full article
(This article belongs to the Special Issue Open Quantum Systems Applied to Quantum Computation)
25 pages, 4068 KB  
Article
The Interplay Between Non-Instantaneous Dynamics of mRNA and Bounded Extrinsic Stochastic Perturbations for a Self-Enhancing Transcription Factor
by Lorenzo Cabriel, Giulio Caravagna, Sebastiano de Franciscis, Fabio Anselmi and Alberto D’Onofrio
Entropy 2026, 28(2), 238; https://doi.org/10.3390/e28020238 - 19 Feb 2026
Abstract
In this work, we consider a simple bistable motif constituted by a self-enhancing Transcription Factor (TF) and its mRNA with non-instantaneous dynamics. In particular, we mainly investigate numerically the impact of bounded stochastic perturbations of Sine–Wiener type affecting the degradation rate/binding rate constant of the TF on the phase-like transitions of the system. We show that the intrinsic exponential delay in the TF positive feedback, due to the presence of an mRNA with slow dynamics, deeply affects the above-mentioned transitions for long but finite times. We also show that, in the case of more complex delays in the feedback and/or in the translation process, the impact of the extrinsic stochasticity is further amplified. We also briefly investigate the power-law behavior (PLB) of the averaged energy spectrum of the TF by showing that, in some cases, the PLB is simply due to the filtering nature of the motif. A similar analysis can also be applied to biological models having a qualitatively similar structure, such as the well-known Capasso and Paveri–Fontana model of cholera spreading. Full article
(This article belongs to the Section Statistical Physics)
14 pages, 1606 KB  
Article
The Properties of Plasma Sheath Containing the Primary Electrons with a Cairns Distribution
by Yida Zhang and Jiulin Du
Entropy 2026, 28(2), 237; https://doi.org/10.3390/e28020237 - 18 Feb 2026
Abstract
We study the properties of a plasma sheath containing cold positive ions, secondary electrons, and primary electrons with a Cairns distribution (a non-thermal velocity distribution). We derive the generalized Bohm criterion and Bohm speed, the new floating potential at the wall, and the new critical secondary electron emission coefficient. We show that these properties of the plasma sheath depend significantly on the α-parameter in the non-thermal α-distribution, and so they are generally different from those obtained if the primary electrons are assumed to follow a Maxwellian distribution. Full article
(This article belongs to the Special Issue Nonextensive Statistical Mechanics in Astrophysics)
22 pages, 390 KB  
Review
Word Sense Disambiguation with Wikipedia Entities: A Survey of Entity Linking Approaches
by Michael Angelos Simos and Christos Makris
Entropy 2026, 28(2), 236; https://doi.org/10.3390/e28020236 - 18 Feb 2026
Abstract
The inference of unstructured text semantics is a crucial preprocessing task for NLP and AI applications. Word sense disambiguation and entity linking tasks resolve ambiguous terms within unstructured text corpora to senses from a predefined knowledge source. Wikipedia has been one of the most popular sources due to its completeness, high link density, and multi-language support. In the context of chatbot-mediated consumption of information in recent years through implicit disambiguation and semantic representations in LLMs, Wikipedia remains an invaluable source and reference point. This survey covers methodologies for entity linking with Wikipedia, including early systems based on hyperlink statistics and semantic relatedness, methods using graph inference problem formalizations and graph label propagation algorithms, neural and contextual methods based on sense embeddings and transformers, and multimodal, cross-lingual, and cross-domain settings. Moreover, we cover semantic annotation workflows that facilitate the scaled-up use of Wikipedia-centric entity linking. We also provide an overview of the available datasets and evaluation measures. We discuss challenges such as partial coverage, NIL concepts, the level of sense definition, combining WSD and large-scale language models, as well as the complementary use of Wikidata. Full article
(This article belongs to the Special Issue Information Theoretic Learning with Its Applications)
47 pages, 633 KB  
Review
A Survey of Lattice-Based Physical-Layer Security for Wireless Systems with p-Modular Lattice Constructions
by Hassan Khodaiemehr, Khadijeh Bagheri, Amin Mohajer, Chen Feng, Daniel Panario and Victor C. M. Leung
Entropy 2026, 28(2), 235; https://doi.org/10.3390/e28020235 - 18 Feb 2026
Abstract
Physical-layer security (PLS) provides an information-theoretic framework for securing wireless communications by exploiting channel and signal-structure asymmetries, thereby avoiding reliance on computational hardness assumptions. Within this setting, lattice codes and their algebraic constructions play a central role in achieving secrecy over Gaussian and fading wiretap channels. This article offers a comprehensive survey of lattice-based wiretap coding, covering foundational concepts in algebraic number theory, Construction A over number fields, and the structure of modular and unimodular lattice families. We review key secrecy metrics, including secrecy gain, flatness factor, and equivocation, and consolidate classical and recent results to provide a unified perspective that links wireless-channel models with their underlying algebraic lattice structures. In addition, we review a newly proposed family of p-modular lattices (Khodaiemehr, 2018) constructed from cyclotomic fields Q(ζ_p) for primes p ≡ 1 (mod 4) via a generalized Construction A framework. We characterize their algebraic and geometric properties and establish a non-existence theorem showing that such constructions cannot be extended to prime-power cyclotomic fields Q(ζ_{p^n}) with n > 1. Finally, motivated by the fact that these p-modular lattices naturally yield mixed-signature structures for which classical theta series diverge, we integrate recent advances on indefinite theta series and modular completions. Drawing on Vignéras’ differential framework and generalized error functions, we outline how modularly completed indefinite theta series provide a principled analytic foundation for defining secrecy-relevant quantities in the indefinite setting. Overall, this work serves both as a survey of algebraic lattice techniques for PLS and as a source of new design insights for secure wireless communication systems. Full article
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)
33 pages, 999 KB  
Review
Review of Prognosis Approaches Applied to Power SiC MOSFETs for Health State and Remaining Useful Life Prediction
by Sanjiv Kumar, Bruno Allard, Malorie Hologne-Carpentier, Guy Clerc and François Auger
Entropy 2026, 28(2), 234; https://doi.org/10.3390/e28020234 - 17 Feb 2026
Abstract
The use of Silicon Carbide (SiC) MOSFETs significantly improves converter performance by increasing efficiency and reducing costs, to the detriment of electro-magnetic emission and reliability. Implementing a predictive maintenance strategy based on a prognosis tool can mitigate this limitation. This literature review offers a methodological synthesis of prognosis design tools for SiC MOSFETs, while also encompassing studies on IGBTs and silicon-based power MOSFETs where these approaches are transferable. The analysis focuses on wear-out prognosis under nominal operating conditions of standard package device, excluding environmental constraints. Articles published up to 2025 were identified in the OpenAlex database using a keyword-based search and manually filtered according to the study scope. Most reviewed works rely on Data-Based prognosis methods, mostly based on neural networks, though out-of-sample validation remains uncommon. Our study also highlights the dependence of Data-Based prognosis performance on the shape of degradation indicator trends. Moreover, the estimation of prediction uncertainty is rarely addressed in the reviewed literature. Despite notable methodological advances, ensuring the reliability of prognosis tools for SiC MOSFETs remains an ongoing research challenge. Full article
27 pages, 4075 KB  
Article
Outlier Detection in Functional Data Using Adjusted Outlyingness
by Zhenghui Feng, Xiaodan Hong, Yingxing Li, Xiaofei Song and Ketao Zhang
Entropy 2026, 28(2), 233; https://doi.org/10.3390/e28020233 - 16 Feb 2026
Abstract
In signal processing and information analysis, the detection and identification of anomalies present in signals constitute a critical research focus. Accurately discerning these deviations using probabilistic, statistical, and information-theoretic methods is essential for ensuring data integrity and supporting reliable downstream analysis. Outlier detection in functional data aims to identify curves or trajectories that deviate significantly from the dominant pattern—a process vital for data cleaning and the discovery of anomalous events. This task is challenging due to the intrinsic infinite dimensionality of functional data, where outliers often appear as subtle shape deformations that are difficult to detect. Moving beyond conventional approaches that discretize curves into multivariate vectors, we introduce a novel framework that projects functional data into a low-dimensional space of meaningful features. This is achieved via a tailored weighting scheme designed to preserve essential curve variations. We then incorporate the Mahalanobis distance to detect directional outlyingness under non-Gaussian assumptions through a robustified bootstrap resampling method with data-driven threshold determination. Simulation studies validated its superior performance, demonstrating higher true positive and lower false positive rates across diverse anomaly types, including magnitude, shape-isolated, shape-persistent, and mixed outliers. The practical utility of our approach was further confirmed through applications in environmental monitoring using seawater spectral data, character trajectory analysis, and population data, underscoring its cross-domain versatility. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
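As a rough illustration of the distance-plus-resampling idea above (not the paper's adjusted-outlyingness weighting scheme), the sketch below projects each curve onto a few summary features, scores them with the Mahalanobis distance, and sets a data-driven cutoff from a bootstrap of the squared distances; the chosen features and quantile level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def curve_features(Y):
    """Project each curve (row of Y) to simple summary features: level, slope, roughness."""
    t = np.linspace(0.0, 1.0, Y.shape[1])
    level = Y.mean(axis=1)
    slope = np.polyfit(t, Y.T, 1)[0]                  # least-squares slope per curve
    rough = np.abs(np.diff(Y, axis=1)).mean(axis=1)   # mean absolute increment
    return np.column_stack([level, slope, rough])

def mahalanobis_outliers(Y, alpha=0.05, n_boot=2000):
    F = curve_features(Y)
    mu, cov = F.mean(axis=0), np.cov(F, rowvar=False)
    d2 = np.einsum('ij,jk,ik->i', F - mu, np.linalg.inv(cov), F - mu)
    # Bootstrap the (1 - alpha) quantile of squared distances to stabilize the cutoff.
    boot_q = [np.quantile(rng.choice(d2, size=len(d2), replace=True), 1 - alpha)
              for _ in range(n_boot)]
    cutoff = np.median(boot_q)
    return np.where(d2 > cutoff)[0], cutoff

# Synthetic data: 100 smooth curves plus 3 magnitude outliers.
t = np.linspace(0, 1, 50)
Y = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(100, 50))
Y[:3] += 2.0
idx, cutoff = mahalanobis_outliers(Y)
print("flagged curves:", idx, "cutoff:", round(cutoff, 2))
```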
18 pages, 1671 KB  
Article
A Multiple-Well Framework for Human Perceptual Decision-Making
by Joseph Fluegemann, Jiaqi Huang, Morgan Lena Rosendahl, Jerome Busemeyer and Jonathan D. Cohen
Entropy 2026, 28(2), 232; https://doi.org/10.3390/e28020232 - 16 Feb 2026
Abstract
We present a quantum cognitive model that integrates the influence of cognitive control into human perceptual decision-making. The model employs a multiple-square-well potential, where each well corresponds to a distinct decision outcome. In this framework, well depth encodes signal strength, while well width represents the domain generality of the outcome. The probability of particle localization within each well determines the subjective probability, which subsequently drives a standard Markovian evidence accumulation process to predict empirical choice and response times. We validate the model using the classic dot motion two-alternative forced-choice (2AFC) task. The model successfully replicates key empirical findings of the task, such as the correlation between motion coherence and drift rates. Furthermore, we apply the model to the Yerkes–Dodson law, capturing the approximate inverted U-shaped relationship between task accuracy and cognitive arousal. We compare two theoretical approaches to modeling arousal: (1) as eigenenergy values and (2) as kinetic energy terms, contrasting their qualitative predictions regarding the Yerkes–Dodson law. Our work provides the first quantitative model of arousal’s influence on human perceptual decision-making and establishes a foundation for determining the exact functional form of the Yerkes–Dodson law. Full article
(This article belongs to the Special Issue Probability Theory and Quantum Information)
47 pages, 2988 KB  
Article
Further Computations of Quantum Fluid Triplet Structures at Equilibrium in the Diffraction Regime
by Luis M. Sesé
Entropy 2026, 28(2), 231; https://doi.org/10.3390/e28020231 - 16 Feb 2026
Abstract
Path integral Monte Carlo simulations and closure computations of quantum fluid triplet structures in the diffraction regime are presented. The principal aim is to shed some more light on the long-standing problem of quantum fluid triplet structures. This topic can be tackled via path integrals in an exact, though computationally demanding, way. The traditional approximate frameworks provided by triplet closures are complementary sources of information that (unexpectedly) may produce, at a much lower cost, useful results. To explore this topic further, the systems selected in this work are helium-3 under supercritical conditions and the quantum hard-sphere fluid on its crystallization line. The fourth-order propagator in the Jang-Jang-Voth’s form (for helium-3) and Cao–Berne’s pair action (for hard spheres) are employed in the corresponding path integral simulations; helium-3 interactions are described with Janzen–Aziz’s pair potential. The closures used are Kirkwood superposition, Jackson–Feenberg convolution, the intermediate AV3, and the symmetrized form of Denton–Ashcroft approximation. The centroid and instantaneous triplet structures, in the real and the Fourier spaces, are investigated by focusing on salient equilateral and isosceles features. To accomplish this goal, additional simulations and closure calculations at the structural pair level are also carried out. The basic theoretical and technical points are described in some detail, the obtained results complete the structural properties reported by this author elsewhere for the abovementioned systems, and a meaningful comparison between the path integral and the closure results is made. In particular, the results illustrate the very slow convergence of the path integral triplet calculations and the behaviors of certain salient Fourier components, such as the double-zero momentum transfers or the equilateral maxima, which may be associated with distinct fluid conditions (e.g., far and near quantum freezing). Closures are shown to yield valuable triplet information over a wide range of conditions, as ascertained from the analyzed centroid structures, which mimic those of fluids at densities higher than the actual ones; thus, closures should remain a part of quantum fluid triplet studies. Full article
(This article belongs to the Section Quantum Information)
25 pages, 2523 KB  
Article
Link Prediction in Heterogeneous Information Networks: Improved Hypergraph Convolution with Adaptive Soft Voting
by Sheng Zhang, Yuyuan Huang, Ziqiang Luo, Jiangnan Zhou, Bing Wu, Ka Sun and Hongmei Mao
Entropy 2026, 28(2), 230; https://doi.org/10.3390/e28020230 - 16 Feb 2026
Abstract
Complex real-world systems are often modeled as heterogeneous information networks with diverse node and relation types, bringing new opportunities and challenges to link prediction. Traditional methods based on similarity or meta-paths fail to fully capture high-order structures and semantics, while existing hypergraph-based models homogenize all high-order information without considering their importance differences, diluting core associations with redundant noise and limiting prediction accuracy. Given these issues, we propose the VE-HGCN, a link prediction model for HINs that fuses hypergraph convolution with soft-voting ensemble strategy. The model first constructs multiple heterogeneous hypergraphs from HINs via network frequent subgraph pattern extraction, then leverages hypergraph convolution for node representation learning, and finally employs a soft-voting ensemble strategy to fuse multi-model prediction results. Extensive experiments on four public HIN datasets show that the VE-HGCN outperforms seven mainstream baseline models, thereby validating the effectiveness of the proposed method. This study offers a new perspective for link prediction in HINs and exhibits good generality and practicality, providing a feasible reference for addressing high-order information utilization issues in complex heterogeneous network analysis. Full article
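The final fusion step above is a soft-voting ensemble; the sketch below shows weighted soft voting over the link probabilities produced by several models. The weights here are fixed by hand for illustration, whereas VE-HGCN adapts them.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Weighted soft voting: average the per-model link probabilities.

    prob_list -- list of arrays, each of shape (n_candidate_links,)
    weights   -- optional per-model weights (e.g., validation scores); uniform if None
    """
    P = np.stack(prob_list)                       # (n_models, n_links)
    w = np.ones(P.shape[0]) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return w @ P                                  # fused probability per candidate link

# Three hypothetical models scoring four candidate links.
p1 = np.array([0.9, 0.2, 0.6, 0.4])
p2 = np.array([0.8, 0.3, 0.7, 0.1])
p3 = np.array([0.7, 0.1, 0.9, 0.3])
fused = soft_vote([p1, p2, p3], weights=[0.5, 0.3, 0.2])
print(fused.round(3), (fused > 0.5).astype(int))  # fused scores and predicted links
```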
18 pages, 457 KB  
Article
Prototype-Based Classifiers and Vector Quantization on a Quantum Computer—Implementing Integer Arithmetic Oracles for Nearest Prototype Search
by Alexander Engelsberger, Magdalena Pšeničkova and Thomas Villmann
Entropy 2026, 28(2), 229; https://doi.org/10.3390/e28020229 - 16 Feb 2026
Abstract
The superposition principle in quantum mechanics enables the encoding of an entire solution space within a single quantum state. By employing quantum routines such as amplitude amplification or the Quantum Approximate Optimization Algorithm (QAOA), this solution space can be explored in a computationally efficient manner to identify optimal or near-optimal solutions. In this article, we propose quantum circuits that operate on binary data representations to address a central task in prototype-based classification and representation learning, namely the so-called winner determination, which realizes the nearest prototype principle. We investigate quantum search algorithms to identify the closest prototype during prediction, as well as quantum optimization schemes for prototype selection in the training phase. For these algorithms, we design oracles based on arithmetic circuits that leverage quantum parallelism to apply mathematical operations simultaneously to multiple inputs. Furthermore, we introduce an oracle for prototype selection, integrated into a learning routine, which obviates the need for formulating the task as a binary optimization problem and thereby reduces the number of required auxiliary variables. All proposed oracles are implemented using the Python 3-based quantum machine learning framework PennyLane and empirically validated on synthetic benchmark datasets. Full article
(This article belongs to the Special Issue The Future of Quantum Machine Learning and Quantum AI, 2nd Edition)
18 pages, 367 KB  
Article
A Generalization of the DMC
by Sergey Tridenski and Anelia Somekh-Baruch
Entropy 2026, 28(2), 228; https://doi.org/10.3390/e28020228 - 16 Feb 2026
Abstract
We consider a generalization of the discrete memoryless channel, in which the channel probability distribution is replaced by a uniform distribution over clouds of channel output sequences. For a random ensemble of such channels, we derive an achievable error exponent, as well as its converse together with the optimal correct-decoding exponent, all as functions of information rate. As a corollary of these results, we obtain the channel ensemble capacity. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
19 pages, 1431 KB  
Article
Robust Trajectory Prediction for Mobile Robots via Minimum Error Entropy Criterion and Adaptive LSTM Networks
by Da Xie, Zengxun Li, Chun Zhang, Chunyang Wang and Xuyang Wei
Entropy 2026, 28(2), 227; https://doi.org/10.3390/e28020227 - 15 Feb 2026
Abstract
Trajectory prediction is critical for safe robot navigation, yet standard deep learning models predominantly rely on the Mean Squared Error (MSE) criterion. While effective under ideal conditions, MSE-based optimization is inherently fragile to non-Gaussian impulsive noise—such as sensor glitches and occlusions—common in real-world deployment. To address this limitation, this paper proposes MEE-LSTM, a robust forecasting framework that integrates Long Short-Term Memory networks with the Minimum Error Entropy (MEE) criterion. By minimizing Rényi’s quadratic entropy of the prediction error, our loss function introduces an intrinsic “gradient clipping” mechanism that effectively suppresses the influence of outliers. Furthermore, to overcome the convergence challenges of fixed-kernel information theoretic learning, we introduce a Silverman-based Adaptive Annealing (SAA) strategy that dynamically regulates the kernel bandwidth. Extensive evaluations on the ETH and UCY datasets demonstrate that MEE-LSTM maintains competitive accuracy on clean benchmarks while exhibiting superior resilience in degraded sensing environments. Notably, we identify a “Scissor Plot” phenomenon under stress testing: in the presence of 20% impulsive noise, the proposed model maintains a stable Average Displacement Error (ADE ≈ 0.51 m), whereas MSE baselines suffer catastrophic degradation (ADE > 2.1 m), representing a 75.7% improvement in robustness. This work provides a statistically grounded paradigm for reliable causal inference in hostile robotic perception. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Discovery)
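The loss above replaces MSE with Rényi's quadratic entropy of the prediction errors; the sketch below shows the standard information-potential estimator with a Silverman-style bandwidth on a plain error vector. It is a generic MEE computation, not the paper's MEE-LSTM training loop or its annealing schedule.

```python
import numpy as np

def silverman_bandwidth(e):
    """Silverman's rule-of-thumb kernel bandwidth for a 1-D sample."""
    return 1.06 * np.std(e) * len(e) ** (-1.0 / 5.0)

def mee_loss(errors, sigma=None):
    """Minimum Error Entropy loss via Renyi's quadratic entropy.

    H_2(e) = -log V(e), where the information potential
    V(e) = (1/N^2) * sum_ij G_sigma(e_i - e_j) is a Gaussian-kernel estimate;
    because the kernel saturates for large error differences, isolated
    outliers contribute far less than they do to MSE.
    """
    e = np.asarray(errors, float)
    if sigma is None:
        sigma = silverman_bandwidth(e)
    diff = e[:, None] - e[None, :]
    kernel = np.exp(-diff ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return -np.log(kernel.mean())

rng = np.random.default_rng(4)
clean = 0.05 * rng.normal(size=200)
noisy = clean.copy()
noisy[:10] += 3.0                                   # 5% impulsive outliers
print(f"MEE clean = {mee_loss(clean):.3f}, MEE with outliers = {mee_loss(noisy):.3f}")
print(f"MSE clean = {np.mean(clean**2):.3f}, MSE with outliers = {np.mean(noisy**2):.3f}")
```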
18 pages, 315 KB  
Article
Simplicity and Complexity in Combinatorial Optimization
by Kamal Dingle and Marcus Hutter
Entropy 2026, 28(2), 226; https://doi.org/10.3390/e28020226 - 15 Feb 2026
Abstract
Many problems in physics and computer science can be framed in terms of combinatorial optimization. Due to this, it is interesting and important to study theoretical aspects of such optimization. Here, we study connections between Kolmogorov complexity, optima, and optimization. We argue that (1) optima and complexity are connected, with extrema being more likely to have low complexity (under certain circumstances); (2) optimization by sampling candidate solutions according to algorithmic probability may be an effective optimization method; and (3) coincidences in extrema to optimization problems are a priori more likely as compared to a purely random null model. Full article
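Point (2) above suggests biasing a sampler toward candidates of low descriptional complexity. As a very loose illustration, the sketch below uses compressed length from zlib as a crude stand-in for (uncomputable) Kolmogorov complexity and compares uniform sampling of bitstrings with sampling biased toward more compressible ones on a toy run-length objective; both the proxy and the objective are assumptions, not the paper's setup.

```python
import zlib
import numpy as np

rng = np.random.default_rng(5)
N = 20                                            # bitstring length

def complexity(bits):
    """Crude complexity proxy: length of the zlib-compressed bitstring."""
    return len(zlib.compress(bits.astype(np.uint8).tobytes()))

def objective(bits):
    """Toy objective favouring long constant runs (its optima are 'simple' strings)."""
    runs = np.diff(np.flatnonzero(np.r_[1, np.diff(bits) != 0, 1]))
    return runs.max()

def search(n_samples=2000, simplicity_bias=False):
    # Draw a large pool; both variants evaluate the objective on n_samples strings.
    pool = rng.integers(0, 2, size=(n_samples * 4, N))
    if simplicity_bias:
        # Keep the most compressible quarter of the pool (an approximate bias
        # toward candidates of high "algorithmic probability").
        comp = np.array([complexity(b) for b in pool])
        pool = pool[np.argsort(comp)[:n_samples]]
    else:
        pool = pool[:n_samples]
    return max(objective(b) for b in pool)

print("uniform sampling, best run length:", search())
print("simplicity-biased sampling, best run length:", search(simplicity_bias=True))
```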
33 pages, 614 KB  
Article
PID Control for Uncertain Systems with Integral Measurements and DoS Attacks Using a Binary Encoding Scheme
by Nan Hou, Yanshuo Wu, Hongyu Gao, Zhongrui Hu and Xianye Bu
Entropy 2026, 28(2), 225; https://doi.org/10.3390/e28020225 - 15 Feb 2026
Abstract
In this paper, an observer-based proportional-integral-derivative (PID) controller is designed for a class of uncertain nonlinear systems with integral measurements, denial-of-service (DoS) attacks and bounded stochastic noises under a binary encoding scheme (BES). Parameter uncertainty is described by a norm-bounded multiplicative expression. Integral measurements are considered to reflect the delayed signal collection of the sensor. For communication, the BES is employed in the signal transmission from the sensor to the observer and from the controller to the actuator. Random bit flipping, which may be caused by channel noises, is modeled by a stochastic noise, and randomly occurring DoS attacks, which may arise over the shared network and completely block the transmitted signals, are also taken into account. Three sets of Bernoulli-distributed random variables are adopted to describe the random occurrence of uncertainties, bit flipping and DoS attacks. The aim of this paper is to design an observer-based PID controller which guarantees that the closed-loop system reaches exponential ultimate boundedness in mean square (EUBMS). By virtue of Lyapunov stability theory, stochastic analysis techniques and the matrix inequality method, a sufficient condition is developed for designing the observer-based PID controller such that the closed-loop system achieves EUBMS performance and the ultimate upper bound of the controlled output is minimized. The gain matrices of the observer-based controller are obtained explicitly by solving an optimization problem with several matrix inequality constraints. Two simulation examples are given to illustrate the usefulness of the proposed control method. Full article
(This article belongs to the Special Issue Information Theory in Control Systems, 3rd Edition)
16 pages, 13649 KB  
Article
Mapping Heterogeneity in Psychological Risk Among University Students Using Explainable Machine Learning
by Penglin Liu, Ji Tang, Hongxiao Wang and Dingsen Zhang
Entropy 2026, 28(2), 224; https://doi.org/10.3390/e28020224 - 14 Feb 2026
Abstract
In the post-pandemic era, student mental health challenges have emerged as a critical issue in higher education. However, conventional assessment approaches often treat at-risk populations as a monolithic entity, thereby limiting intervention effectiveness. This study proposes a novel computational framework that integrates explainable artificial intelligence (XAI) with unsupervised learning to decode the latent heterogeneity of psychological risk mechanisms. We developed a “predict-explain-discover” pipeline leveraging TreeSHAP and Gaussian Mixture Models to identify distinct risk subtypes based on a 2556-dimensional feature space encompassing lexical, linguistic, and affective indicators. Our approach identified three theoretically-grounded subtypes: academically-driven (28.46%), socio-emotional (43.85%), and internal regulatory (27.69%) risks. Sensitivity analysis using top-20 core features further validated the structural stability of these mechanisms, proving that the subtypes are anchored in the model’s primary decision drivers rather than high-dimensional noise. The framework demonstrates how black-box classifiers can be transformed into diagnostic tools, bridging the gap between predictive accuracy and mechanistic understanding. Our findings align with the Research Domain Criteria (RDoC) and establish a foundation for precision interventions targeting specific risk drivers. This work advances computational mental health research through methodological innovations in mechanism-based subtyping and practical strategies for personalized student support. Full article
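The "predict-explain-discover" pipeline above pairs a tree-model explainer with Gaussian-mixture clustering; the sketch below shows that generic pattern on synthetic data using the shap and scikit-learn packages. The feature count, labels, and number of components are illustrative assumptions, not the study's 2556-dimensional feature space or its subtypes.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.mixture import GaussianMixture
import shap

rng = np.random.default_rng(6)

# Synthetic stand-in data: 500 students, 12 behavioural/linguistic features,
# binary at-risk label (purely illustrative, not the paper's dataset).
X = rng.normal(size=(500, 12))
y = (X[:, 0] + 0.8 * X[:, 3] - 0.6 * X[:, 7] + 0.5 * rng.normal(size=500) > 0).astype(int)

# 1. Predict: fit the risk classifier.
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2. Explain: per-sample TreeSHAP attributions of the predicted risk.
shap_values = np.asarray(shap.TreeExplainer(clf).shap_values(X))

# 3. Discover: cluster the explanation vectors of predicted at-risk samples
#    to expose distinct risk mechanisms (subtypes).
at_risk = clf.predict(X) == 1
gmm = GaussianMixture(n_components=3, random_state=0).fit(shap_values[at_risk])
subtype = gmm.predict(shap_values[at_risk])
print("subtype sizes:", np.bincount(subtype))
```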
22 pages, 7987 KB  
Article
RioCC: Efficient and Accurate Class-Level Code Recommendation Based on Deep Code Clone Detection
by Hongcan Gao, Chenkai Guo and Hui Yang
Entropy 2026, 28(2), 223; https://doi.org/10.3390/e28020223 - 14 Feb 2026
Abstract
Context: Code recommendation plays an important role in improving programming efficiency and software quality. Existing approaches mainly focus on method- or API-level recommendations, which limits their effectiveness to local code contexts. From a multi-stage recommendation perspective, class-level code recommendation aims to efficiently narrow a large candidate code space while preserving essential structural information. Objective: This paper proposes RioCC, a class-level code recommendation framework that leverages deep forest-based code clone detection to progressively reduce the candidate space and improve recommendation efficiency in large-scale code spaces. Method: RioCC models the recommendation process as a coarse-to-fine candidate reduction procedure. In the coarse-grained stage, a quick search-based filtering module performs rapid candidate screening and initial similarity estimation, effectively pruning irrelevant candidates and narrowing the search space. In the fine-grained stage, a deep forest-based analysis with cascade learning and multi-grained scanning captures context- and structure-aware representations of class-level code fragments, enabling accurate similarity assessment and recommendation. This two-stage design explicitly separates coarse candidate filtering from detailed semantic matching to balance efficiency and accuracy. Results: Experiments on a large-scale dataset containing 192,000 clone pairs from BigCloneBench and a collected code pool show that RioCC consistently outperforms state-of-the-art methods, including CCLearner, Oreo, and RSharer, across four types of code clones, while significantly accelerating the recommendation process with comparable detection accuracy. Conclusions: By explicitly formulating class-level code recommendation as a staged retrieval and refinement problem, RioCC provides an efficient and scalable solution for large-scale code recommendation and demonstrates the practical value of integrating lightweight filtering with deep forest-based learning. Full article
(This article belongs to the Section Multidisciplinary Applications)
5 pages, 195 KB  
Editorial
Modified Gravity: From Black Holes Entropy to Current Cosmology, 4th Edition
by Kazuharu Bamba
Entropy 2026, 28(2), 222; https://doi.org/10.3390/e28020222 - 14 Feb 2026
Abstract
Recent cosmological observational data—such as type Ia supernovae (SNe Ia) [...] Full article