Author Contributions
Conceptualization and supervision, Z.P.; data curation and writing—original draft preparation, B.D.; software and validation, M.S.; writing—review and editing, V.D. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia and by the Science Fund of the Republic of Serbia (Grant No. 6527104, AI-Com-in-AI).
Conflicts of Interest
The authors declare no conflict of interest.
References
- Jayant, N.C.; Noll, P. Digital Coding of Waveforms: Principles and Applications to Speech and Video; Prentice Hall: Englewood Cliffs, NJ, USA, 1984. [Google Scholar]
- Gersho, A.; Gray, R. Vector Quantization and Signal Compression; Kluwer Academic Publishers: New York, NY, USA, 1992. [Google Scholar]
- Chu, W.C. Speech Coding Algorithms: Foundation and Evolution of Standardized Coders; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
- Peric, Z.; Nikolic, J.; Denic, B.; Despotovic, V. Forward adaptive dual-mode quantizer based on the first-degree spline approximation and embedded G.711 codec. Radioengineering 2019, 28, 729–739. [Google Scholar] [CrossRef]
- Nikolic, J.; Peric, Z. Lloyd-Max’s algorithm implementation in speech coding algorithm based on forward adaptive technique. Informatica 2008, 19, 255–270. [Google Scholar] [CrossRef]
- Prosalentis, E.A.; Tombras, G.S. A 2-bit adaptive delta modulation system with improved performance. EURASIP J. Adv. Signal Proc. 2007, 2007, 16286. [Google Scholar] [CrossRef]
- Peric, Z.; Denic, B.; Despotovic, V. Novel two-bit adaptive delta modulation algorithms. Informatica 2019, 30, 117–134. [Google Scholar] [CrossRef]
- Peric, Z.; Denic, B.; Despotovic, V. An efficient two-digit adaptive delta modulation for Laplacian source coding. Int. J. Elect. 2019, 106, 1085–1100. [Google Scholar] [CrossRef]
- Denic, B.; Peric, Z.; Despotovic, V. Three-level delta modulation for Laplacian source coding. Adv. Elect. Comp. Eng. 2017, 17, 95–102. [Google Scholar] [CrossRef]
- Delp, E.J.; Saenz, M.; Salama, P. Block Truncation Coding (BTC). In Handbook of Image and Video Processing; Elsevier Academic Press: San Diego, CA, USA, 2005; pp. 661–670. [Google Scholar]
- Jiang, M.; Yang, H. Secure outsourcing algorithm of BTC feature extraction in cloud computing. IEEE Access 2020, 8, 106958–106967. [Google Scholar] [CrossRef]
- Simic, N.; Peric, Z.; Savic, M. Coding algorithm for grayscale images—Design of piecewise uniform quantizer with Golomb-Rice code and novel analytical model for performance analysis. Informatica 2017, 28, 703–724. [Google Scholar] [CrossRef]
- Savic, M.; Peric, Z.; Dincic, M. Coding algorithm for grayscale images based on piecewise uniform quantizers. Informatica 2012, 23, 125–140. [Google Scholar] [CrossRef]
- Huang, K.; Ni, B.; Yang, X. Efficient quantization for neural networks with binary weights and low bitwidth activations. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Honolulu, HI, USA, 27 January–1 February 2019. [Google Scholar]
- Nia, V.P.; Belbahri, M. Binary quantizer. J. Comput. Vis. Imaging Syst. 2018, 4, 3. [Google Scholar]
- Qin, H.; Gong, R.; Liu, X.; Bai, X.; Song, J.; Sebe, N. Binary neural networks: A survey. arXiv 2020, arXiv:2004.03333. [Google Scholar] [CrossRef]
- Pouransari, H.; Tu, Z.; Tuzel, O. Least squares binary quantization of neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14−19 June 2020. [Google Scholar]
- Courbariaux, M.; Hubara, I.; Soudry, D.; El-Yaniv, R.; Bengio, Y. Binarized neural networks: Training neural networks with weights and activations constrained to +1 or −1. arXiv 2016, arXiv:1602.02830. [Google Scholar]
- Hubara, I.; Courbariaux, M.; Soudry, D.; El-Yaniv, R.; Bengio, Y. Binarized neural networks. In Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS), Barcelona, Spain, 5−10 December 2016. [Google Scholar]
- Simons, T.; Lee, D.J. A review of binarized neural networks. Electronics 2019, 8, 661. [Google Scholar] [CrossRef]
- Darabi, S.; Belbahri, M.; Courbariaux, M.; Nia, V.P. Regularized binary network training. arXiv 2018, arXiv:1812.11800. [Google Scholar]
- Gazor, S.; Zhang, W. Speech probability distribution. IEEE Signal Proc. Lett. 2003, 10, 204–207. [Google Scholar] [CrossRef]
- Banner, R.; Nahshan, Y.; Hoffer, E.; Soudry, D. ACIQ: Analytical clipping for integer quantization of neural networks. arXiv 2018, arXiv:1810.05723. [Google Scholar]
- Banner, R.; Nahshan, Y.; Soudry, D. Post training 4-bit quantization of convolutional networks for rapid-deployment. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada, 8−10 December 2019. [Google Scholar]
- Zrilic, D.G. Circuits and Systems Based on Delta Modulation; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
- Gibson, J.D. Speech compression. Information 2016, 7, 32. [Google Scholar] [CrossRef]
- Jacob, B.; Kligys, S.; Chen, B.; Zhu, M.; Tang, M.; Howard, A.; Adam, H.; Kalenichenko, D. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Gong, J.; Shen, H.; Zhang, G.; Liu, X.; Li, S.; Jin, G.; Maheshwari, N.; Fomenko, E.; Segal, E. Highly efficient 8-bit low precision inference of convolutional neural networks with IntelCaffe. arXiv 2018, arXiv:1805.08691. [Google Scholar]
- Krishnamoorthi, R. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv 2018, arXiv:1806.08342. [Google Scholar]
- McKinstry, J.L.; Esser, S.K.; Appuswamy, R.; Bablani, D.; Arthur, J.V.; Yildiz, I.B.; Modha, D.S. Discovering low-precision networks close to full-precision networks for efficient embedded inference. arXiv 2018, arXiv:1809.04191. [Google Scholar]
- Choi, J.; Wang, Z.; Venkataramani, S.; Chuang, P.I.; Srinivasan, V.; Gopalakrishnan, K. PACT: Parameterized clipping activation for quantized neural networks. arXiv 2018, arXiv:1805.06085. [Google Scholar]
- Ullah, I.; Manzo, M.; Shah, M.; Madden, M. Graph Convolutional Networks: Analysis, improvements and results. arXiv 2019, arXiv:1912.09592. [Google Scholar]
- Hubara, I.; Courbariaux, M.; Soudry, D.; El-Yaniv, R.; Bengio, Y. Quantized neural networks: Training neural networks with low precision weights and activations. J. Mach. Learn. Res. 2018, 18, 1–30. [Google Scholar]
- Tkachenko, R.; Izonin, I.; Kryvinska, N.; Dronyuk, I.; Zub, K. An approach towards increasing prediction accuracy for the recovery of missing IoT data based on the GRNN-SGTM ensemble. Sensors 2020, 20, 2625. [Google Scholar] [CrossRef] [PubMed]
- Tkachenko, R.; Izonin, I. Model and principles for the implementation of neural-like structures based on geometric data transformations. In Proceedings of the International Conference on Computer Science (ICCSEEA 2018) AISC Series; Springer: Cham, Switzerland, 2019; Volume 754, pp. 578–587. [Google Scholar]
- Na, S. Asymptotic formulas for mismatched fixed-rate minimum MSE Laplacian quantizers. IEEE Signal Proc. Lett. 2008, 15, 13–16. [Google Scholar]
- Demonte, P. HARVARD Speech Corpus—Audio Recording 2019; University of Salford Collection, 2019. Available online: https://doi.org/10.17866/rd.salford.c.4437578.v1 (accessed on 1 September 2020).
- The USC-SIPI Image Database. Available online: http://sipi.usc.edu/database (accessed on 1 September 2020).
- LeCun, Y.; Cortes, C.; Burges, C. The MNIST Handwritten Digit Database. Available online: yann.lecun.com (accessed on 1 September 2020).
Figure 1.
Illustration of binary quantizer type 1.
Figure 2.
Mean-squared error (MSE) distortion dependence on parameter x_{clip} (Δ) for binary quantizer type 1.
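The distortion curve of Figure 2 can be reproduced numerically. Below is a minimal sketch, assuming (as Table 3 indicates for type 1) that the representation levels sit at ±x_clip/2 and that the input is a unit-variance Laplacian source; the function names are ours:

```python
import math
import random

def binary_quantize_type1(x, x_clip):
    """One-bit quantizer sketch: output +/- x_clip/2 according to the sign
    of the input (levels at the midpoints of the clipped half-ranges;
    this placement is an illustrative assumption)."""
    y2 = x_clip / 2.0
    return y2 if x >= 0 else -y2

def mse_laplacian(x_clip, n=200_000, seed=0):
    """Monte Carlo estimate of the MSE distortion for a unit-variance
    Laplacian input quantized with binary_quantize_type1."""
    rng = random.Random(seed)
    b = 1.0 / math.sqrt(2.0)  # Laplacian scale so that variance = 2*b^2 = 1
    err = 0.0
    for _ in range(n):
        # inverse-CDF sampling of the Laplacian distribution
        u = rng.random() - 0.5
        x = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        q = binary_quantize_type1(x, x_clip)
        err += (x - q) ** 2
    return err / n
```

Sweeping `x_clip` and plotting `mse_laplacian(x_clip)` traces out the convex distortion curve of Figure 2; the minimum falls where the level x_clip/2 coincides with the optimal value 1/√2.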
Figure 3.
Illustration of binary quantizer type 2.
Figure 4.
MSE distortion dependence on parameter x_{clip} (Δ) for binary quantizer type 2.
Figure 5.
Performance evaluation of binary quantizers type 1 and 2 for a given x_{clip}.
Figure 6.
Signal-to-quantization noise ratio (SQNR) as a function of input signal variance for binary quantizer type 1.
Figure 7.
SQNR as a function of input signal variance for binary quantizer type 2.
Figure 8.
Pulse code modulation algorithm with a binary quantizer.
Figure 9.
SQNR of the forward adaptive binary quantizer (y_{2} = $1/\sqrt{2}$) in a wide range of input data variances.
Figure 10.
SQNR across speech frames in the case of PCM.
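The forward-adaptive PCM scheme of Figures 8–10 can be sketched as follows. This is a simplified illustration, not the paper's exact codec: the frame length and the use of the RMS value as the per-frame gain are assumptions, and coding of the side information is omitted:

```python
import math

def forward_adaptive_binary_pcm(signal, frame_len=80, y2=1.0 / math.sqrt(2.0)):
    """Frame-wise forward-adaptive one-bit PCM sketch: per frame, estimate
    the RMS gain (sent as side information in a real codec), quantize each
    normalized sample to +/- y2, and rescale on reconstruction."""
    out = []
    for start in range(0, len(signal), frame_len):
        frame = signal[start:start + frame_len]
        gain = math.sqrt(sum(s * s for s in frame) / len(frame)) or 1.0
        out.extend(gain * (y2 if s >= 0 else -y2) for s in frame)
    return out

def sqnr_db(original, reconstructed):
    """Signal-to-quantization-noise ratio in decibels."""
    sig = sum(v * v for v in original)
    noise = sum((a - b) ** 2 for a, b in zip(original, reconstructed))
    return 10.0 * math.log10(sig / noise)
```

Because the gain is re-estimated per frame, the SQNR stays near the optimum of the fixed quantizer (about 3 dB for a one-bit quantizer on a Laplacian source) across a wide range of input variances, which is the robustness Figure 9 illustrates.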
Figure 11.
Adaptive delta modulation algorithm with a binary quantizer.
Figure 12.
SQNR across speech frames in the case of adaptive delta modulation (ADM).
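A generic ADM loop of the kind shown in Figure 11 can be sketched as below. The step-size adaptation rule (grow when consecutive bits agree, shrink when they alternate) and the constants are illustrative textbook choices, not the paper's exact algorithm:

```python
def adm_encode(signal, step0=0.1, grow=1.5, shrink=1.0 / 1.5):
    """Adaptive delta modulation sketch: a one-bit quantizer on the
    prediction error, with multiplicative step-size adaptation.
    Returns the bitstream and the decoder-side reconstruction."""
    bits, recon = [], []
    pred, step, prev_bit = 0.0, step0, None
    for s in signal:
        bit = 1 if s >= pred else 0          # one-bit quantization of the error
        if prev_bit is not None:
            # consecutive equal bits signal slope overload -> grow the step;
            # alternating bits signal granular noise -> shrink it
            step *= grow if bit == prev_bit else shrink
        pred += step if bit else -step       # integrate the quantized error
        bits.append(bit)
        recon.append(pred)
        prev_bit = bit
    return bits, recon
```

Running the encoder frame by frame and evaluating the SQNR of `recon` against the input yields per-frame curves of the type plotted in Figure 12.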
Figure 13.
The reconstructed image for block truncation coding (BTC) (m = 4, r_{av} = 8 bits/block, and r_{σ} = 8 bits/block) with (a) an optimal binary quantizer (y_{2} = $1/\sqrt{2}$) and (b) a non-optimal binary quantizer (y_{2} = 3).
Figure 14.
Learning curves for the considered multilayer perceptron (MLP) neural network.
Figure 15.
Distribution of learned weights for the considered MLP neural network.
Table 1.
Block truncation coding algorithm with the optimal binary quantizer (y_{2} = $1/\sqrt{2}$) applied to the monochrome image of Lena.
| | Block 4 × 4 | | | Block 8 × 8 | | |
|---|---|---|---|---|---|---|
| r_{σ} | 8 | 5 | 4 | 8 | 5 | 4 |
| r_{sr} | 8 | 5 | 4 | 8 | 5 | 4 |
| PSQNR (dB) | 32.06 | 31.84 | 31.37 | 28.59 | 28.47 | 28.18 |
| R (bpp) | 2 | 1.625 | 1.5 | 1.25 | 1.16 | 1.125 |
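The bit rates in Table 1 follow the standard BTC budget: one bit per pixel for the bit plane, plus the side information (r_{sr} and r_{σ} bits per block) amortized over the m × m block. A minimal sketch of that arithmetic (the function name is ours):

```python
def btc_rate_bpp(m, r_sr, r_sigma):
    """Total BTC bit rate in bits per pixel: one bit per pixel for the
    bit plane plus the side information spread over the m x m block."""
    return 1.0 + (r_sr + r_sigma) / (m * m)
```

For example, a 4 × 4 block with 8-bit mean and standard deviation gives 1 + 16/16 = 2 bpp, while an 8 × 8 block with the same side information gives 1 + 16/64 = 1.25 bpp, matching the table.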
Table 2.
Performance comparison of the block truncation coding algorithm using optimal and non-optimal binary quantizers, applied to the monochrome image of Lena.
| | Block 4 × 4 | | |
|---|---|---|---|
| r_{σ} | 8 | 8 | 8 |
| r_{sr} | 8 | 8 | 8 |
| y_{2} | $1/\sqrt{2}$ | 1.5 | 3 |
| PSQNR (dB) | 32.06 | 28.69 | 20.87 |
| R (bpp) | 2 | 2 | 2 |
Table 3.
Prediction accuracy of the multilayer perceptron neural network for different representation levels of the binary quantizer.
| | y_{2} = $1/\sqrt{2}$ | y_{2} = x_{max}/2 (Type 1) | y_{2} = x_{max} (Type 2) | Full Precision |
|---|---|---|---|---|
| Accuracy (%) | 91.28 | 81.66 | 89.96 | 96.70 |
| SQNR (dB) | 4.287 | 1.636 | 3.205 | – |
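The SQNR entries in Table 3 measure how closely the binary representation levels track the learned weights. A minimal sketch of that evaluation (our own helper, not the paper's exact pipeline; it assumes the weights have been rescaled to unit variance before quantization):

```python
import math

def binary_sqnr_db(weights, y2):
    """Empirical SQNR in decibels when every weight is replaced by the
    binary representation level +/- y2 matching its sign."""
    signal = sum(w * w for w in weights)
    noise = sum((w - math.copysign(y2, w)) ** 2 for w in weights)
    return 10.0 * math.log10(signal / noise)
```

For a unit-variance Laplacian weight distribution the choice y_{2} = 1/√2 maximizes this ratio, which is why it also yields the best accuracy among the quantized variants in the table.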
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).