Abstract
Neural networks significantly outperform traditional methods both in decoding amplitude-, phase-, and polarization-encoded data pages and in suppressing noise within them. However, the mechanism behind neural networks' denoising capability is not fully understood. We discover that zeroing out certain channels can improve the model's reconstruction quality. Accordingly, this paper presents a method for objectively locating noise features from γ, the weights of the Batch Normalization (BN) layers. γ reflects the importance of a channel in the model, and γ < 1 indicates that the channel may contain noise features. In our experiments, removing the channels that contained a higher proportion of these noise features improved the Peak Signal-to-Noise Ratio (PSNR) of the reconstructed data pages by ~2% compared with directly outputting the data without removing the noisy channels. This indicates that neural networks achieve efficient denoising of encoded data pages by adjusting the weight parameters of the BN layers, thereby suppressing or enhancing specific channels.
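To make the channel-zeroing criterion concrete, the following is a minimal PyTorch sketch, not the paper's actual implementation: it assumes a trained model containing `nn.BatchNorm2d` layers, and the function name `zero_noisy_channels` and the `threshold` parameter (defaulting to 1, per the abstract's γ < 1 criterion) are illustrative choices, not names from the paper.

```python
import torch
import torch.nn as nn

def zero_noisy_channels(model: nn.Module, threshold: float = 1.0) -> None:
    """Zero BN channels whose learned scale gamma falls below `threshold`.

    Setting both gamma (weight) and beta (bias) of a BN channel to zero
    forces that channel's output to zero, which silences channels the
    abstract flags as likely noise carriers (gamma < 1).
    """
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            with torch.no_grad():
                # Boolean mask of channels to keep, cast to the weight dtype
                keep = (module.weight.abs() >= threshold).to(module.weight.dtype)
                module.weight.mul_(keep)  # zero gamma of suspected noise channels
                module.bias.mul_(keep)    # zero beta so the channel outputs exactly 0
```

In use, one would load a trained decoding/denoising model, call `zero_noisy_channels(model)` once, and then run inference as usual; the BN statistics and all other weights are left untouched, so only the flagged channels are suppressed.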