Lossy P-LDPC Codes for Compressing General Sources Using Neural Networks
Abstract
1. Introduction
2. NN-LP-LDPC System
2.1. Transformation Module
2.2. Quantization
Algorithm 1: RMD algorithm. (Pseudocode table not recoverable from the extracted text.)
Algorithm 2. (Caption and pseudocode not recoverable from the extracted text.)
Algorithm 3. (Caption and pseudocode not recoverable from the extracted text.)
2.3. Decoder
3. System Optimization and Technical Details
3.1. Gradient Backpropagation
3.2. Training the Network
4. Simulation Results and Discussions
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Gallager, R. Low-density parity-check codes. IRE Trans. Inf. Theory 1962, 8, 21–28.
- MacKay, D.J. Good error-correcting codes based on very sparse matrices. IEEE Trans. Inf. Theory 1999, 45, 399–431.
- Thorpe, J. Low-density parity-check (LDPC) codes constructed from protographs. IPN Prog. Rep. 2003, 42, 42–154.
- Liva, G.; Chiani, M. Protograph LDPC Codes Design Based on EXIT Analysis. In Proceedings of the IEEE GLOBECOM 2007—IEEE Global Telecommunications Conference, Washington, DC, USA, 26–30 November 2007; pp. 3250–3254.
- Gupta, A.; Verdú, S. Operational duality between lossy compression and channel coding. IEEE Trans. Inf. Theory 2011, 57, 3171–3179.
- Wainwright, M.J.; Maneva, E.; Martinian, E. Lossy Source Compression Using Low-Density Generator Matrix Codes: Analysis and Algorithms. IEEE Trans. Inf. Theory 2010, 56, 1351–1368.
- Liveris, A.; Xiong, Z.; Georghiades, C. Compression of binary sources with side information at the decoder using LDPC codes. IEEE Commun. Lett. 2002, 6, 440–442.
- Matsunaga, Y.; Yamamoto, H. A coding theorem for lossy data compression by LDPC codes. IEEE Trans. Inf. Theory 2003, 49, 2225–2229.
- Braunstein, A.; Kayhan, F.; Zecchina, R. Efficient LDPC codes over GF(q) for lossy data compression. In Proceedings of the 2009 IEEE International Symposium on Information Theory, Seoul, Republic of Korea, 28 June–3 July 2009; pp. 1978–1982.
- Fang, Y. LDPC-Based Lossless Compression of Nonstationary Binary Sources Using Sliding-Window Belief Propagation. IEEE Trans. Commun. 2012, 60, 3161–3166.
- Liu, Y.; Wang, L.; Wu, H.; Liu, S. Performance of lossy P-LDPC codes over GF(2). In Proceedings of the 2020 IEEE 14th International Conference on Signal Processing and Communication Systems (ICSPCS), Adelaide, SA, Australia, 14–16 December 2020; pp. 1–5.
- Wang, R.; Liu, S.; Wu, H.; Wang, L. The Efficient Design of Lossy P-LDPC Codes over AWGN Channels. Electronics 2022, 11, 3337.
- Deng, H.; Song, D.; Miao, M.; Wang, L. Design of Lossy Compression of the Gaussian Source with Protograph LDPC Codes. In Proceedings of the 2021 IEEE 15th International Conference on Signal Processing and Communication Systems (ICSPCS), Sydney, Australia, 13–15 December 2021; pp. 1–6.
- Ballé, J.; Laparra, V.; Simoncelli, E.P. End-to-end optimization of nonlinear transform codes for perceptual quality. In Proceedings of the 2016 IEEE Picture Coding Symposium (PCS), Nuremberg, Germany, 4–7 December 2016; pp. 1–5.
- Toderici, G.; Vincent, D.; Johnston, N.; Jin Hwang, S.; Minnen, D.; Shor, J.; Covell, M. Full resolution image compression with recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5306–5314.
- Zhang, Z.T.; Yeh, C.H.; Kang, L.W.; Lin, M.H. Efficient CTU-based intra frame coding for HEVC based on deep learning. In Proceedings of the 2017 IEEE Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Malaysia, 12–15 December 2017; pp. 661–664.
- Theis, L.; Shi, W.; Cunningham, A.; Huszár, F. Lossy image compression with compressive autoencoders. arXiv 2017, arXiv:1703.00395.
- Choi, Y.; El-Khamy, M.; Lee, J. Variable Rate Deep Image Compression With a Conditional Autoencoder. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
- Xie, Y.; Cheng, K.L.; Chen, Q. Enhanced invertible encoding for learned image compression. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, China, 20–24 October 2021; pp. 162–170.
- Yang, R.; Mandt, S. Lossy image compression with conditional diffusion models. arXiv 2022, arXiv:2209.06950.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Dewantara, D.S.; Budi, I.; Ibrohim, M.O. 3218IR at SemEval-2020 Task 11: Conv1D and word embedding in propaganda span identification at news articles. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, Barcelona, Spain (Online), 12 December 2020; pp. 1716–1721.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Werbos, P. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. Thesis, Harvard University, Cambridge, MA, USA, 1974.
- Agustsson, E.; Theis, L. Universally quantized neural compression. Adv. Neural Inf. Process. Syst. 2020, 33, 12367–12376.
- Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: Hoboken, NJ, USA, 2006; pp. 463–508.
- Hu, X.Y.; Eleftheriou, E.; Arnold, D.M. Regular and irregular progressive edge-growth Tanner graphs. IEEE Trans. Inf. Theory 2005, 51, 386–398.
- Divsalar, D.; Dolinar, S.; Jones, C.R.; Andrews, K. Capacity-approaching protograph codes. IEEE J. Sel. Areas Commun. 2009, 27, 876–888.
- Rajpoot, N.M. Simulation of the Rate-Distortion Behaviour of a Memoryless Laplacian Source. In Proceedings of the 4th Middle Eastern Symposium on Simulation and Modelling (MESM 2002), Sharjah, United Arab Emirates, 28–30 October 2002.
| Literature | Main Contribution |
|---|---|
| Braunstein [9] | Lossy compression of binary sources using the reinforced belief propagation (RBP) decoding algorithm of LDPC codes |
| Fang [10] | Lossy compression of binary sources using the sliding-window BP decoding algorithm of LDPC codes |
| Liu [11] | P-LDPC codes for binary source compression |
| Wang [12] | Lossy compression performance of binary sources using P-LDPC codes over the AWGN channel |
| Deng [13] | P-LDPC codes for Gaussian source compression |
| Proposed scheme | Designs the RMD algorithm, combines a neural network with P-LDPC codes, and realizes lossy compression of general sources |
| Literature | LDPC Type | Method | Sources |
|---|---|---|---|
| Braunstein [9] | LDPC | RBP | Binary source |
| Fang [10] | LDPC | Sliding-window BP | Binary source |
| Liu [11], Wang [12] | P-LDPC | RBP | Binary source |
| Deng [13] | P-LDPC | MLC and RBP | Gaussian source |
| Proposed scheme | P-LDPC | Transformation and RMD | General source |
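For orientation, the sketch below only illustrates the processing order implied by Sections 2.1–2.3 and the table above: a neural transformation of the general source, quantization to a binary sequence, and P-LDPC compression using the RMD algorithm. Every function name and signature here is a hypothetical placeholder, not the paper's implementation.

```python
from typing import Callable
import numpy as np

def nn_lp_ldpc_compress(
    source_block: np.ndarray,
    transform: Callable[[np.ndarray], np.ndarray],  # neural transformation module (Sec. 2.1)
    quantize: Callable[[np.ndarray], np.ndarray],   # quantizer producing a binary sequence (Sec. 2.2)
    pldpc_rmd: Callable[[np.ndarray], np.ndarray],  # P-LDPC compression via the RMD algorithm
) -> np.ndarray:
    """Illustrates module order only: transform -> quantize -> P-LDPC (RMD)."""
    z = transform(source_block)
    b = quantize(z)
    return pldpc_rmd(b)
```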
| Loss | Learning Rate | Optimizer | Batch Size | Number of Epochs |
|---|---|---|---|---|
| MSE |  | Adam | 1024 | 1000 |
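The table fixes the MSE loss, Adam optimizer, batch size of 1024, and 1000 epochs, but the learning rate and network architecture are not recoverable from this excerpt. The PyTorch sketch below only shows how these settings plug into a standard training loop; the model, block length, training data, and learning rate value are placeholder assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the transformation network of Section 2.1;
# the real architecture and layer sizes are not given in this excerpt.
BLOCK_LEN = 64  # assumed source block length
model = nn.Sequential(nn.Linear(BLOCK_LEN, 128), nn.ReLU(), nn.Linear(128, BLOCK_LEN))

criterion = nn.MSELoss()                                   # MSE loss, as listed in the table
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam; learning rate value assumed

# Placeholder training data: i.i.d. Gaussian source samples, batched at 1024.
data = TensorDataset(torch.randn(16384, BLOCK_LEN))
loader = DataLoader(data, batch_size=1024, shuffle=True)

for epoch in range(1000):            # number of epochs from the table
    for (x,) in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), x)  # reconstruction distortion of the source block
        loss.backward()
        optimizer.step()
```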
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).