# Pruning- and Quantization-Based Compression Algorithm for Number of Mixed Signals Identification Network


## Abstract


## 1. Introduction

- In this paper, we study a source number estimation network (SNEN) that achieves robust estimation of the number of sources;
- Redundant connections in the convolutional neural network are removed by weight pruning, yielding a sparse network structure and accelerating network inference;
- The weights and activation values of the convolutional neural network are quantized; the model is quantized to different bit widths and the results are compared to select the optimal bit width;
- A combination of weight pruning and parameter quantization is applied to the network to obtain a lightweight source number estimation network.

## 2. System Model

## 3. Lightweight Source Number Estimation Network Method

#### 3.1. Data Preprocessing

#### 3.2. Source Number Estimation Network

- Reconstruct the input complex data $x\left(n\right)$ into a three-dimensional tensor;
- Feed the reconstructed data into each of two convolutional layers, whose two convolution operations correspond to the real-part output and the imaginary-part output in Formula (5);
- Reconstruct the two outputs from step 2 into a complex sequence, then feed the sequence into the next layer.
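The complex convolution in the steps above can be sketched with four real convolutions, assuming Formula (5) takes the standard complex-product form $\mathrm{Re}(y)=\mathrm{Re}(x)*\mathrm{Re}(w)-\mathrm{Im}(x)*\mathrm{Im}(w)$, $\mathrm{Im}(y)=\mathrm{Re}(x)*\mathrm{Im}(w)+\mathrm{Im}(x)*\mathrm{Re}(w)$ (the function name `complex_conv1d` and the 1-D setting are illustrative, not the paper's exact layer):

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex 1-D convolution realized with four real convolutions:
    Re(y) = Re(x)*Re(w) - Im(x)*Im(w),  Im(y) = Re(x)*Im(w) + Im(x)*Re(w).
    This mirrors the two parallel real-valued convolutional layers whose
    outputs are recombined into a complex sequence."""
    xr, xi = x.real, x.imag
    wr, wi = w.real, w.imag
    yr = np.convolve(xr, wr, mode="valid") - np.convolve(xi, wi, mode="valid")
    yi = np.convolve(xr, wi, mode="valid") + np.convolve(xi, wr, mode="valid")
    return yr + 1j * yi

# sanity check against numpy's native complex convolution
x = np.array([1 + 2j, 3 - 1j, 0.5 + 0.5j, -2 + 1j])
w = np.array([1 - 1j, 2 + 0.5j])
assert np.allclose(complex_conv1d(x, w), np.convolve(x, w, mode="valid"))
```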

#### 3.3. Threshold-Based Weight Pruning Algorithm

The threshold-based weight pruning proceeds in three steps:

- Train the source number estimation network until it converges, so that the network is fully learned;
- Set a threshold $t$ and prune the connections with absolute values of weights less than the threshold $t$, transforming the dense network into a sparse network;
- Fine-tune the resulting sparse network to reduce the loss of accuracy due to pruning.
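As a minimal sketch of these steps, threshold-based magnitude pruning can be expressed with a binary mask that is reused during fine-tuning so pruned connections stay at zero (the names, shapes, and learning rate below are illustrative, not the paper's implementation):

```python
import numpy as np

def prune_by_threshold(weights, t):
    """Zero out connections whose absolute weight is below threshold t.
    Returns the sparse weights and the binary mask used to keep pruned
    positions at zero during fine-tuning (magnitude pruning)."""
    mask = (np.abs(weights) >= t).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(4, 4))          # weights of a trained layer
w_sparse, mask = prune_by_threshold(w, t=0.07)

# during fine-tuning, gradients are re-masked so pruned weights stay zero:
grad = rng.normal(0, 0.01, size=w.shape)
w_sparse = w_sparse - 0.1 * grad * mask
assert np.all(w_sparse[mask == 0] == 0)
```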

#### 3.4. Parameter Quantization

- Train the full-precision source number estimation network, then compute the scaling factors for the weights and activation values according to the algorithm;
- Quantize the weights and activation values using the scaling factors: each weight and activation value is multiplied by its scaling factor and rounded;
- Evaluate the quantization result; if the accuracy requirement is not met, select a different quantization bit width and repeat.

**Algorithm 1.** Network pruning and quantization algorithm.

Input: $W$, $N$, ${a}_{0}$, ${\theta}_{a}$, $X$, ${S}_{X}$, where $W=\{{W}_{1},{W}_{2},\dots ,{W}_{L}\}$ are the weights of the pre-trained model; $N$ is the number of threshold intervals; ${a}_{0}$ is the initial accuracy of the model; ${\theta}_{a}$ is the accuracy threshold; $X$ is the parameter to be quantized; ${S}_{X}$ is the scaling factor.

Output: ${W}_{k}^{i}$, $Q\left(X\right)$, where ${W}_{k}^{i}$ are the weights after training and $Q\left(X\right)$ is the quantized parameter.

1: function Train($W$, $N$, ${a}_{0}$, ${\theta}_{a}$, $X$, ${S}_{X}$)

2: Obtain the maximum ${W}_{max}$ and minimum ${W}_{min}$ of the absolute values of the model parameters;

3: Obtain the threshold interval ${n}_{0}$ from Equation (13);

4: Obtain the test thresholds $V$ from Equation (14);

5: for ${V}_{n}$ in $V$ do

6: for ${W}_{k}^{i}$ in $W$ do

7: if $\left|{W}_{k}^{i}\right|<{V}_{n}$ then

8: ${T}_{k}=p({W}_{k}^{i})$

9: end if

10: end for

11: if $\left|a-{a}_{0}\right|>{\theta}_{a}$ then

12: return ${W}_{k}^{i}$;

13: end if

14: end for

15: Quantize the weights and activation values:

$Q(X)=\frac{\min\left(\max\left(\lfloor X\times {S}_{X}\rfloor ,\ 1-{2}^{n-1}\right),\ {2}^{n-1}-1\right)}{{S}_{X}}$

16: end function
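The quantization step in line 15 of Algorithm 1 can be sketched directly from the formula: scale, floor, clip to the signed $n$-bit range $[1-2^{n-1},\ 2^{n-1}-1]$, then rescale back. The choice of scaling factor below is one common convention and an assumption here, not the paper's stated rule:

```python
import numpy as np

def quantize(X, S_X, n):
    """Uniform n-bit quantization as in line 15 of Algorithm 1:
    Q(X) = min(max(floor(X * S_X), 1 - 2^(n-1)), 2^(n-1) - 1) / S_X
    (simulated quantization: the result is rescaled back to float)."""
    q = np.clip(np.floor(X * S_X), 1 - 2 ** (n - 1), 2 ** (n - 1) - 1)
    return q / S_X

X = np.array([-1.2, -0.3, 0.0, 0.4, 1.5])
S_X = (2 ** 7 - 1) / np.max(np.abs(X))   # one common choice of scaling factor (assumption)
Xq = quantize(X, S_X, n=8)
assert np.max(np.abs(X - Xq)) <= 1.0 / S_X   # error bounded by one quantization step
```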

## 4. Results

#### 4.1. Performance Analysis of Source Number Estimation Network

#### 4.2. Performance Analysis of Network after Pruning

#### 4.3. Performance Analysis of Network after Quantization

#### 4.4. The Combination of Pruning and Quantization

## 5. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References


**Figure 7.** Comparison of various source number estimation methods. (**a**) Accuracy of the proposed method compared with traditional estimation methods. (**b**) Accuracy of the proposed method compared with other CNNs.

**Table 1.** Symbols and their brief description/definition in Section 2.

| Symbol | Description/Definition |
|---|---|
| ${a}_{ij}$ | unknown mixing coefficient |
| ${s}_{i}\left(t\right)$ | source signal |
| ${x}_{j}(t)$ | mixed signal |
| $\mathit{n}\left(t\right)$ | $N\times 1$-dimensional noise vector |
| $\mathit{x}\left(t\right)$ | $N\times M$-dimensional mixed matrix |

**Table 2.** Symbols and their brief description/definition in Section 3.

| Symbol | Description/Definition |
|---|---|
| $x\left(n\right)$ | input data |
| $w(n)$ | convolution kernel weights |
| $\mathrm{Re}\{\cdot\}$ | real part |
| $\mathrm{Im}\{\cdot\}$ | imaginary part |
| $\theta$ | the set of all trainable parameters |
| $P(\cdot)$ | posterior probability |
| ${y}^{(m)}$ | observed data |
| ${W}_{k}$ | connections between neurons in the $k$-th layer of the neural network |
| $L$ | the number of layers of the neural network |
| $A(\cdot)$ | the accuracy of the neural network |
| $\odot$ | the Hadamard product |
| ${W}_{k}^{i}$ | the $i$-th weight of the $k$-th network layer |
| ${\xi}_{k}$ | all neurons of the $k$-th network layer |
| ${n}_{0}$ | threshold interval |
| ${V}_{n}$ | the $n$-th threshold |
| $\lambda$ | regularization coefficient |
| $X$ | the parameter to be quantized |
| $\lfloor \cdot \rfloor$ | floor (round-down) operation |
| ${S}_{X}$ | the scaling factor of $X$ |

**Table 3.** Number of weights, threshold, and pruning rate of each layer in the source number estimation network.

| Layer | Number of Weights | Threshold | Pruning Rate |
|---|---|---|---|
| Conv1 | $1.4\times {10}^{4}$ | 0.07125 | 43.40% |
| Conv2 | $24\times {10}^{4}$ | 0.10174 | 68.42% |
| Conv3 | $46\times {10}^{4}$ | 0.06571 | 78.49% |
| FC1 | $200\times {10}^{4}$ | 0.16179 | 87.18% |
| FC2 | $48\times {10}^{4}$ | 0.11726 | 78.98% |
| Total | $319\times {10}^{4}$ | - | 83.20% |

| Bits (W/A) | Model Size/MB | Average Accuracy | Compression Ratio | Accuracy Drop | $\tau$ |
|---|---|---|---|---|---|
| 32/32 | 12.92 | 96.7% | - | - | - |
| 16/16 | 6.39 | 96.0% | 50.54% | 0.007 | 72.20 |
| 8/8 | 4.16 | 95.8% | 67.80% | 0.009 | 75.33 |
| 4/4 | 3.38 | 91.3% | 73.83% | 0.054 | 13.67 |
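The figures in the table above are numerically consistent with compression ratio $=1-$ (compressed size / original size) and $\tau=$ compression ratio / accuracy drop; the definition of $\tau$ is inferred from the numbers here, not taken from a stated formula. A small sketch under that inferred definition:

```python
def tradeoff(orig_mb, new_mb, orig_acc, new_acc):
    """Size/accuracy trade-off figures as they appear in the table:
    compression ratio = 1 - new size / original size,
    accuracy drop    = original accuracy - new accuracy (as a fraction),
    tau              = compression ratio / accuracy drop (inferred definition)."""
    ratio = 1 - new_mb / orig_mb
    drop = orig_acc - new_acc
    return ratio, drop, ratio / drop

# reproduces the 16/16 row: 50.54% compression, 0.007 drop, tau ~= 72.20
ratio, drop, tau = tradeoff(12.92, 6.39, 0.967, 0.960)
assert abs(ratio - 0.5054) < 1e-3
assert abs(drop - 0.007) < 1e-6

# reproduces the 8/8 row: tau ~= 75.33
ratio, drop, tau = tradeoff(12.92, 4.16, 0.967, 0.958)
assert abs(tau - 75.33) < 0.5
```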

| | Model Size/MB | Average Accuracy | Compression Ratio | Accuracy Drop |
|---|---|---|---|---|
| Original Network | 12.92 | 96.7% | - | - |
| Pruning | 7.24 | 95.3% | 43.96% | 1.4% |
| Quantization | 4.16 | 95.8% | 67.80% | 0.9% |
| Pruning + Quantization | 3.78 | 94.4% | 70.74% | 2.3% |
| Other Networks | 2.47 | 86.4% | - | - |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Shen, W.; Wang, W.; Zhu, J.; Zhou, H.; Wang, S.
Pruning- and Quantization-Based Compression Algorithm for Number of Mixed Signals Identification Network. *Electronics* **2023**, *12*, 1694.
https://doi.org/10.3390/electronics12071694
