This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Article

A Reinforcement Learning-Based Optimization Strategy for Noise Budget Management in Homomorphically Encrypted Deep Network Inference

1 Faculty of Information Network Security, Yunnan Police College, Kunming 650223, China
2 Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(2), 275; https://doi.org/10.3390/electronics15020275
Submission received: 13 November 2025 / Revised: 26 December 2025 / Accepted: 31 December 2025 / Published: 7 January 2026
(This article belongs to the Special Issue Security and Privacy in Artificial Intelligence Systems)

Abstract

Homomorphic encryption provides a powerful cryptographic solution for privacy-preserving deep neural network inference, enabling computation on encrypted data. However, the practical application of homomorphic encryption is fundamentally constrained by the noise budget, a core component of homomorphic encryption schemes. The substantial multiplicative depth of modern deep neural networks rapidly consumes this budget, necessitating frequent, computationally expensive bootstrapping operations to refresh the noise. This bootstrapping process has emerged as the primary performance bottleneck. Current noise management strategies are predominantly static, triggering bootstrapping at pre-defined, fixed intervals. This approach is sub-optimal for deep, complex architectures, leading to excessive computational overhead and potential accuracy degradation due to cumulative precision loss. To address this challenge, we propose a Deep Network-aware Adaptive Noise-budget Management mechanism, which formulates noise budget allocation as a sequential decision problem optimized via reinforcement learning. The proposed mechanism comprises two core components. First, we construct a layer-aware noise consumption prediction model to accurately estimate the heterogeneous computational costs and noise accumulation across different network layers. Second, we design a Deep Q-Network-driven optimization algorithm, in which the agent is trained to derive a globally optimal policy that dynamically determines the optimal timing and network location for executing bootstrapping operations, based on the real-time output of the noise predictor and the current network state. This approach shifts from a static, pre-defined strategy to an adaptive, globally optimized one. Experimental validation on several typical deep neural network architectures demonstrates that the proposed mechanism significantly outperforms state-of-the-art fixed strategies, markedly reducing redundant bootstrapping overhead while maintaining model performance.
Keywords: homomorphic encryption; privacy-preserving inference; noise budget management; reinforcement learning
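
As a rough illustration of the sequential-decision view described in the abstract, the sketch below poses bootstrapping placement as a small decision process: at each layer the agent either continues or bootstraps, trading bootstrap latency against the risk of exhausting the noise budget. Everything here is hypothetical and not the authors' implementation: the per-layer noise costs (LAYER_COST), budget size, and bootstrap/failure penalties are invented numbers, and a simple tabular Q-learning agent stands in for the paper's layer-aware noise predictor and Deep Q-Network.

# Minimal sketch (illustrative only): bootstrapping placement as a
# sequential decision problem, solved here with tabular Q-learning.
import random

LAYER_COST = [4, 7, 7, 3, 9, 6, 2, 8]   # hypothetical noise consumed per layer
BUDGET_MAX = 20                          # hypothetical initial noise budget
BOOT_COST = 10.0                         # relative latency of one bootstrap
FAIL_COST = 100.0                        # penalty if the budget is exhausted

def step(layer, budget, action):
    """Advance one layer; action 1 = bootstrap before evaluating this layer."""
    reward = 0.0
    if action == 1:
        budget = BUDGET_MAX              # bootstrapping refreshes the budget...
        reward -= BOOT_COST              # ...at a fixed latency cost
    budget -= LAYER_COST[layer]          # homomorphic ops consume noise budget
    if budget < 0:                       # budget exhausted: result unrecoverable
        reward -= FAIL_COST
        budget = 0
    return budget, reward

Q = {}                                   # tabular stand-in for the paper's DQN
def q(s, a):
    return Q.get((s, a), 0.0)

alpha, gamma, eps = 0.1, 0.95, 0.2
for episode in range(20000):
    budget = BUDGET_MAX
    for layer in range(len(LAYER_COST)):
        s = (layer, budget)
        # epsilon-greedy action: 0 = continue, 1 = bootstrap now
        a = random.randint(0, 1) if random.random() < eps else int(q(s, 1) > q(s, 0))
        budget, r = step(layer, budget, a)
        s2 = (layer + 1, budget)
        future = gamma * max(q(s2, 0), q(s2, 1)) if layer + 1 < len(LAYER_COST) else 0.0
        Q[(s, a)] = q(s, a) + alpha * (r + future - q(s, a))

# Greedy rollout: where does the learned policy choose to bootstrap?
budget, plan = BUDGET_MAX, []
for layer in range(len(LAYER_COST)):
    a = int(q((layer, budget), 1) > q((layer, budget), 0))
    plan.append(a)
    budget, _ = step(layer, budget, a)
print("bootstrap-before-layer decisions:", plan)

Under these made-up costs the total noise consumption exceeds the budget, so the learned policy must insert at least two bootstraps; the point of the sketch is only that timing and placement emerge from learned Q-values rather than a fixed interval, mirroring the shift from static to adaptive scheduling described above.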

Share and Cite

MDPI and ACS Style

Zhang, C.; Bai, F.; Wan, J.; Chen, Y. A Reinforcement Learning-Based Optimization Strategy for Noise Budget Management in Homomorphically Encrypted Deep Network Inference. Electronics 2026, 15, 275. https://doi.org/10.3390/electronics15020275

AMA Style

Zhang C, Bai F, Wan J, Chen Y. A Reinforcement Learning-Based Optimization Strategy for Noise Budget Management in Homomorphically Encrypted Deep Network Inference. Electronics. 2026; 15(2):275. https://doi.org/10.3390/electronics15020275

Chicago/Turabian Style

Zhang, Chi, Fenhua Bai, Jinhua Wan, and Yu Chen. 2026. "A Reinforcement Learning-Based Optimization Strategy for Noise Budget Management in Homomorphically Encrypted Deep Network Inference" Electronics 15, no. 2: 275. https://doi.org/10.3390/electronics15020275

APA Style

Zhang, C., Bai, F., Wan, J., & Chen, Y. (2026). A Reinforcement Learning-Based Optimization Strategy for Noise Budget Management in Homomorphically Encrypted Deep Network Inference. Electronics, 15(2), 275. https://doi.org/10.3390/electronics15020275

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
