
Information-Theoretic Security and Privacy

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 20 July 2025 | Viewed by 2783

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Interests: differential privacy; information-theoretic privacy and secrecy; information networks; federated learning; statistical inference

Guest Editor
Department of Electrical Engineering, University of North Texas, Denton, TX 76203, USA
Interests: information theory; security and privacy; coding theory; distributed storage and computation; wireless communications

Special Issue Information

Dear Colleagues,

In today's digital landscape, information security and privacy are crucial. As data breaches and cyber threats grow more sophisticated, safeguarding sensitive information has become paramount for organizations and individuals alike. This Special Issue delves into the multifaceted challenges of, and innovative solutions for, information security and privacy across a variety of information systems, including wireless networks, machine learning, smart grids, and social graphs.

Ensuring that information systems are secure, reliable, and private is essential. Information-theoretic measures offer a powerful framework to address these challenges, providing insights that quantify these essential qualities and rigorously evaluate and guarantee their integrity. This approach illuminates the path to building more trustworthy and resilient information systems across various domains.

We invite previously unpublished contributions at the intersection of information theory, networks, wireless communications, and data privacy, including (but not limited to) the following topics:

  • Theoretical foundations of information-theoretic privacy;
  • Privacy-preserving distributed/federated learning;
  • The design of privacy mechanisms for big data analytics (including healthcare and social networks);
  • Performance evaluations of privacy-preserving mechanisms;
  • Privacy-preserving data publishing;
  • Energy-efficient physical-layer security;
  • The integration of differential privacy and physical-layer security;
  • Cryptographic protocols for differential privacy;
  • Private information retrieval;
  • Privacy-preserving distributed computing.

Dr. Mohamed Seif
Dr. Hua Sun
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and proceeding to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • differential privacy
  • machine learning
  • federated learning
  • secure aggregation
  • age of information
  • gossip over networks
  • statistical inference over networks
  • information-theoretic privacy and security
  • wireless networks
  • social networks and healthcare

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (2 papers)


Research

37 pages, 979 KiB  
Article
Variable-Length Coding with Zero and Non-Zero Privacy Leakage
by Amirreza Zamani and Mikael Skoglund
Entropy 2025, 27(2), 124; https://doi.org/10.3390/e27020124 - 24 Jan 2025
Viewed by 1052
Abstract
A private compression design problem is studied, in which an encoder observes useful data Y, wishes to compress them using a variable-length code, and communicates them over an unsecured channel. Since Y is correlated with the private attribute X, the encoder uses a private compression mechanism to design an encoded message C and sends it over the channel. An adversary is assumed to have access to the encoder output, i.e., C, and tries to estimate X. Furthermore, both the encoder and the decoder are assumed to have access to a shared secret key W. In this work, the design goal is to encode the message C with the minimum possible average length while satisfying certain privacy constraints. We consider two scenarios: 1. zero privacy leakage, i.e., perfect privacy (secrecy); 2. non-zero privacy leakage, i.e., a non-perfect privacy constraint. For the perfect privacy scenario, we first study two different privacy mechanism design problems and find upper bounds on the entropy of the optimizers by solving a linear program. We use the obtained optimizers to design C. We strengthen the existing bounds in two cases: 1. |X| ≥ |Y|; 2. the realization of (X,Y) follows a specific joint distribution. In the second case in particular, we use two-part construction coding to achieve the upper bounds. Furthermore, in a numerical example, we study the obtained bounds and show that they can improve on existing results. Finally, we strengthen the obtained bounds using the minimum entropy coupling concept and a greedy entropy-based algorithm. For the non-perfect privacy scenario, we find upper and lower bounds on the average length of the encoded message using different privacy metrics and study them in special cases. For achievability, we use two-part construction coding and extended versions of the functional representation lemma. Lastly, in an example, we show that the bounds can be asymptotically tight.
(This article belongs to the Special Issue Information-Theoretic Security and Privacy)
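The setting of this abstract can be illustrated with a toy instance: a one-bit private attribute X, useful data Y = X, a shared one-bit key W, and mutual information I(X;C) as the leakage measure. This is a minimal sketch for intuition only; the paper's actual schemes (two-part construction coding, minimum entropy coupling) are not reproduced, and the function names are illustrative.

```python
from math import log2

def mutual_information(joint):
    """I(A;B) in bits from a dict {(a, b): probability}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Without the key: C = Y = X, so the adversary recovers X perfectly.
joint_no_key = {(x, x): 0.5 for x in (0, 1)}   # distribution of (X, C)
leak_no_key = mutual_information(joint_no_key)  # 1 bit of leakage

# With the key: C = Y XOR W, where W is uniform and independent of (X, Y),
# so C is independent of X -- perfect privacy at the same 1-bit code length.
joint_key = {}
for x in (0, 1):
    for w in (0, 1):
        c = x ^ w
        joint_key[(x, c)] = joint_key.get((x, c), 0.0) + 0.25
leak_key = mutual_information(joint_key)        # 0 bits of leakage

print(f"leakage without key: {leak_no_key:.3f} bits")
print(f"leakage with key:    {leak_key:.3f} bits")
```

The example shows why the shared key W is essential in the perfect-privacy scenario: a one-time-pad-style encoding keeps the average length at one bit while driving the leakage I(X;C) to zero.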

22 pages, 2513 KiB  
Article
CURATE: Scaling-Up Differentially Private Causal Graph Discovery
by Payel Bhattacharjee and Ravi Tandon
Entropy 2024, 26(11), 946; https://doi.org/10.3390/e26110946 - 5 Nov 2024
Viewed by 779
Abstract
Causal graph discovery (CGD) is the process of estimating the underlying probabilistic graphical model that represents the joint distribution of the features of a dataset. CGD algorithms are broadly classified into two categories: (i) constraint-based algorithms, where the outcome depends on conditional independence (CI) tests, and (ii) score-based algorithms, where the outcome depends on an optimized score function. Because sensitive features of observational data are prone to privacy leakage, differential privacy (DP) has been adopted to ensure user privacy in CGD. Adding the same amount of noise at every step of this sequential estimation process degrades the predictive performance of the algorithms: the initial CI tests in constraint-based algorithms and the later iterations of the optimization process in score-based algorithms are crucial, and thus need to be more accurate and less noisy. Based on this key observation, we present CURATE (CaUsal gRaph AdapTivE privacy), a DP-CGD framework with adaptive privacy budgeting. In contrast to existing DP-CGD algorithms, which use uniform privacy budgeting across all iterations, CURATE adapts the budget by minimizing the error probability (constraint-based) or maximizing the number of iterations of the optimization problem (score-based) while keeping the cumulative leakage bounded. To validate our framework, we present a comprehensive set of experiments on several datasets and show that CURATE achieves higher utility than existing DP-CGD algorithms with less privacy leakage.
(This article belongs to the Special Issue Information-Theoretic Security and Privacy)
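The adaptive-budgeting idea in this abstract can be sketched as follows: allocate a decaying per-test privacy budget so that the early CI tests, which steer the rest of the discovery, receive less noise, while basic composition keeps the cumulative leakage under a total budget. The geometric allocation rule, function names, and parameters below are hypothetical illustrations, not CURATE's actual optimization.

```python
import math
import random

def adaptive_budgets(n_tests, eps_total, decay=0.7):
    """Geometrically decaying per-test budgets that sum to eps_total,
    so earlier tests get a larger share (hence less Laplace noise)."""
    weights = [decay ** i for i in range(n_tests)]
    scale = eps_total / sum(weights)
    return [w * scale for w in weights]

def noisy_ci_statistic(stat, sensitivity, eps, rng):
    """Laplace mechanism: release stat + Laplace(sensitivity / eps) noise,
    sampled via the inverse-CDF method."""
    u = rng.random() - 0.5                       # uniform on [-0.5, 0.5)
    b = sensitivity / eps
    return stat - b * math.copysign(math.log(1 - 2 * abs(u)), u)

rng = random.Random(0)
eps_total = 1.0
budgets = adaptive_budgets(n_tests=5, eps_total=eps_total)

# Basic composition: cumulative leakage stays bounded by eps_total.
assert sum(budgets) <= eps_total + 1e-9

for i, eps in enumerate(budgets):
    noisy = noisy_ci_statistic(stat=0.3, sensitivity=1.0, eps=eps, rng=rng)
    print(f"test {i}: eps={eps:.3f}, noisy statistic={noisy:.3f}")
```

Under uniform budgeting each test would receive eps_total / 5 = 0.2; here the first test gets a larger share and the last a smaller one, trading accuracy in late tests for accuracy in the early, decision-critical ones.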
