
Entropy

Entropy is an international and interdisciplinary peer-reviewed open access journal of entropy and information studies, published monthly online by MDPI.
The International Society for the Study of Information (IS4SI) and the Spanish Society of Biomedical Engineering (SEIB) are affiliated with Entropy, and their members receive a discount on the article processing charge.
Indexed in PubMed | Quartile Ranking JCR - Q2 (Physics, Multidisciplinary)

All Articles (14,184)

Multi-view learning has recently gained considerable attention in graph representation learning, as it enables the fusion of complementary information from multiple views to enhance representation quality. However, most existing studies neglect the fact that irrelevant views may introduce noise and degrade representation quality. To address this issue, we propose a novel multi-view representation learning framework, the View Filter-driven graph representation fusion network (ViFi). Following the “less for better” principle, the framework focuses on selecting informative views while discarding irrelevant ones. Specifically, an entropy-based adaptive view filter is designed to dynamically select the most informative views by evaluating their feature–topology entropy characteristics, aiming not only to reduce irrelevance among views but also to enhance their complementarity. In addition, to promote more effective fusion of the informative views, we propose an optimized fusion mechanism that leverages the filtered views to identify the optimal integration strategy via a novel information gain function. Through extensive experiments on classification and clustering tasks, ViFi demonstrates clear performance advantages over existing state-of-the-art approaches.

24 December 2025
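
As a rough illustration of the view-filtering idea in the abstract above, the following minimal Python sketch scores each view with a simple Shannon-entropy proxy and keeps the top-k views. The function names, the feature-only entropy score, and the fixed k are assumptions made for this example; ViFi's actual criterion combines feature and topology entropy and is not reproduced here.

```python
import numpy as np

def view_entropy(features: np.ndarray, bins: int = 20) -> float:
    """Shannon entropy of a view's feature value distribution (illustrative proxy)."""
    hist, _ = np.histogram(features, bins=bins, density=True)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def filter_views(views, k: int = 2):
    """Keep the k views with the highest entropy score and drop the rest.

    `views` is a list of (name, feature_matrix) pairs; this sketch uses
    feature entropy only, not the feature-topology combination of the paper.
    """
    scored = sorted(views, key=lambda v: view_entropy(v[1]), reverse=True)
    return scored[:k]

# Example: three synthetic views, keep the two most informative ones.
rng = np.random.default_rng(0)
views = [("view_a", rng.normal(size=(100, 16))),
         ("view_b", rng.normal(size=(100, 16))),
         ("view_c", np.zeros((100, 16)))]  # constant view, zero entropy
print([name for name, _ in filter_views(views, k=2)])
```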

Dynamic Searchable Encryption (DSE) is essential for enabling confidential search operations over encrypted data in cloud computing. However, all existing single-server DSE schemes are vulnerable to Keyword Pair Result Pattern (KPRP) leakage and fail to simultaneously achieve forward and backward security. To address these challenges, this paper proposes a conjunctive keyword DSE scheme based on a dual-server architecture (DS-CKDSE). By integrating a full binary tree with an Indistinguishable Bloom Filter (IBF), the proposed scheme constructs a secure index: leaf nodes store the keywords and associated file identifiers, while the information of non-leaf nodes is encoded within the IBF. A random state-update mechanism, a dual-state array for each keyword, and a timestamp-based trapdoor design jointly enable robust forward and backward security while supporting efficient conjunctive queries. The dual-server architecture mitigates KPRP leakage by separating secure index storage from trapdoor verification. The security analysis shows that the new scheme satisfies adaptive security under a defined leakage function. Finally, the performance of the proposed scheme is evaluated through experiments, and the results demonstrate that it achieves high efficiency in both update and search operations.

24 December 2025
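
The toy sketch below illustrates the keyword-index idea from the abstract above, with a plain Bloom filter standing in for the Indistinguishable Bloom Filter; encryption, the full binary tree, the state updates, and the dual-server split are all omitted, and the class and method names are assumptions chosen for this example.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter (a stand-in for the IBF, with no indistinguishability)."""
    def __init__(self, m: int = 1024, k: int = 3):
        self.bits = [0] * m
        self.m, self.k = m, k

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def maybe_contains(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

# Index: each file's keywords go into one filter; a conjunctive query matches
# a file only if every queried keyword may be present (false positives possible).
index = {}
for file_id, keywords in {"f1": {"entropy", "graph"}, "f2": {"entropy", "crypto"}}.items():
    bf = BloomFilter()
    for w in keywords:
        bf.add(w)
    index[file_id] = bf

query = {"entropy", "crypto"}
print([fid for fid, bf in index.items() if all(bf.maybe_contains(w) for w in query)])
```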

We investigate optimizing quantum tree search algorithms by employing a nested Grover algorithm. This approach seeks to improve on previous Grover-based methods by expanding the tree of partial assignments to a specific depth and conducting a quantum search within the subset of remaining assignments. The study explores the implications and constraints of this approach, providing a foundation for quantum artificial intelligence applications. Instead of utilizing conventional heuristic functions, which are incompatible with quantum tree search, we introduce the partial candidate solution, which indicates a node at a specific depth of the tree. By employing such a function, we define the concatenated oracle, which enables us to decompose the quantum tree search using Grover’s algorithm. With a branching factor of 2 and a depth of m, the cost of Grover’s algorithm is O(2^{m/2}). The concatenated oracle allows us to reduce the cost to for m partial candidate solutions.

24 December 2025
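
As a worked illustration of the quoted baseline cost (and deliberately not of the paper's reduced bound, which is not reproduced here): the standard Grover iteration count for searching N items with one marked item is roughly (π/4)·√N, so a depth-m binary tree with 2^m leaf assignments needs on the order of 2^{m/2} iterations. A short Python sketch of that arithmetic, with the function name chosen for illustration:

```python
import math

def grover_iterations(num_items: int, num_marked: int = 1) -> int:
    """Approximate optimal Grover iteration count, ~ (pi/4) * sqrt(N / M)."""
    return round((math.pi / 4) * math.sqrt(num_items / num_marked))

# Depth-m binary tree with branching factor 2 => 2**m leaf assignments,
# so the baseline Grover cost grows like 2**(m/2).
for m in (10, 20, 30):
    print(m, grover_iterations(2 ** m))
```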

This article examines the challenges and opportunities in extracting causal information from text with Large Language Models (LLMs). It first establishes the importance of causality extraction and then explores different views on causality, including the common-sense ideas informing different data annotation schemes, Aristotle’s Four Causes, and Pearl’s Ladder of Causation, noting the relevance of this conceptual variety for the task. The paper reviews datasets and prior work on finding causal expressions, using both traditional machine learning methods and LLMs. Although the known limitations of LLMs, such as hallucinations and lack of common sense, affect the reliability of causal findings, GPT and Gemini models (GPT-5, Gemini 2.5 Pro, and others) show the ability to conduct causality analysis; moreover, they can apply different perspectives, such as counterfactual and Aristotelian ones. They are also capable of explaining and critiquing causal analyses: we report an experiment showing that, in addition to producing largely flawless analyses, the newer models exhibit very high agreement of 88–91% on causal relationships between events, much higher than the typically reported inter-annotator agreement of 30–70%. The article concludes with a discussion of the lessons learned about these challenges and of how LLMs might help address them in the future; for example, LLMs could help address the sparsity of annotated data. Moreover, LLMs point to a future where causality analysis in texts focuses not on annotations but on understanding, since causality is about semantics rather than word spans. The Appendices and shared data show examples of LLM outputs on tasks involving causal reasoning and causal information extraction, demonstrating the models’ current abilities and limits.

24 December 2025
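
A minimal sketch of how a causality-extraction query of the kind described above might be posed to an LLM; the `call_llm` stub and the prompt wording are hypothetical placeholders rather than the article's actual setup, and a real provider client would need to be plugged in behind the stub.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted LLM (e.g., a GPT or Gemini endpoint)."""
    raise NotImplementedError("plug in the provider's client here")

def extract_causal_pairs(text: str) -> str:
    """Ask the model to list cause-effect pairs stated or implied in the text."""
    prompt = (
        "List every cause-effect relation expressed in the passage below as\n"
        "'CAUSE -> EFFECT' pairs, one per line. If there are none, answer 'NONE'.\n\n"
        f"Passage: {text}"
    )
    return call_llm(prompt)

# Example usage (requires a real LLM client behind call_llm):
# print(extract_causal_pairs("The dam failed because heavy rains raised the river."))
```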

Reprints of Collections

Semantic Information Theory
Reprint

Editors: Meixia Tao, Kai Niu, Youlong Wu
The Entropy Production—as Cornerstone in Applied Nonequilibrium Thermodynamics
Reprint

Dedicated to Professor Signe Kjelstrup on the Occasion of Her 75th Birthday
Editors: Dick Bedeaux, Fernando Bresme, Alex Hansen


Entropy - ISSN 1099-4300