# Self-Adaptive Approximate Mobile Deep Learning


## Abstract


## 1. Introduction

- We investigate the inference accuracy, the energy-saving potential, and the practicality of two orthogonal neural network compression techniques (quantization and slimming) in the context of mobile human activity recognition;
- We devise dynamic neural network compression adaptation algorithms that achieve comparable inference accuracy with up to 33% fewer network parameters than the best-performing static compression;
- We conduct a 21-person experiment and assess the utility of our deep learning adaptation approach on a previously unseen dataset, demonstrating both the real-world usability of the implementation and the energy savings it brings.
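The adaptation idea behind the second contribution (selecting a compression level per input) can be sketched as a mapping from estimated input difficulty to network configuration. The level values, thresholds, and function names below are illustrative assumptions, not the configuration used in the paper:

```python
# Hypothetical compression levels, expressed as fractions of the full
# network width (as in a slimmable network); values are illustrative only.
LEVELS = [0.25, 0.5, 0.75, 1.0]

def select_level(difficulty, thresholds=(0.25, 0.5, 0.75)):
    """Map an estimated input difficulty in [0, 1] to a compression level:
    easy inputs run on a narrow (cheap) network, hard ones on a wider one."""
    for level, t in zip(LEVELS, thresholds):
        if difficulty <= t:
            return level
    return LEVELS[-1]
```

Inference then runs the network slimmed to the returned width, so easy samples cost fewer multiply-accumulates than hard ones.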

## 2. Related Work

#### 2.1. Deep Learning Model Optimization

#### 2.2. Dynamic Model Compression

## 3. Methodology Preliminaries

#### 3.1. Any-Precision and Slimmable Neural Networks

#### 3.2. Use-Case: Human Activity Recognition

## 4. Dynamic Neural Network Adaptation Algorithms

#### 4.1. Input Difficulty—Properties Impacting Compressed Classifier Performance

#### 4.2. Guiding Dynamic Network Compression with kNN

#### 4.3. Guiding Dynamic Network Compression with Softmax Confidence

#### 4.4. Guiding Dynamic Network Compression with LDA

#### 4.5. Comparative Analysis

## 5. Dynamic DNN Compression on Mobile Devices

#### 5.1. Implementation

#### 5.2. User Study Details

#### 5.3. Experimental Results

#### 5.4. Power Consumption Evaluation

## 6. Discussion

## 7. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References


**Figure 1.** Illustration of the two neural network compression techniques used in our experiments. (**a**) Any-Precision Deep Neural Networks. (**b**) Slimmable Neural Network.

**Figure 4.** Per-activity accuracy of both networks for various compression levels. (**a**) Any-Precision deep neural networks. (**b**) Slimmable Neural Network.

**Figure 5.** Per-user accuracy of both networks for various compression levels. (**a**) Any-Precision deep neural networks. (**b**) Slimmable Neural Network.

**Figure 6.** Comparison of the neural network accuracy and its softmax-based confidence for different compression levels on the UCI HAR dataset. (**a**) Any-Precision. (**b**) SNN (Slimmable Neural Network).

**Figure 7.** Accuracy of the k-NN-based compression level selection algorithm. The blue dots show the accuracy that can be achieved with static compression. (**a**) Any-Precision. (**b**) SNN (Slimmable Neural Network).
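A minimal sketch of the kind of k-NN difficulty estimation such a selection algorithm can rely on: predict an input's difficulty as the mean difficulty of its nearest training samples, then compress more aggressively for easy inputs. The distance metric, `k`, and data shapes are assumptions for illustration:

```python
import numpy as np

def knn_predict_difficulty(train_X, train_y, x, k=3):
    """Predict the difficulty of input x as the mean difficulty score of its
    k nearest training samples under Euclidean distance."""
    train_X = np.asarray(train_X, dtype=float)
    train_y = np.asarray(train_y, dtype=float)
    dists = np.linalg.norm(train_X - np.asarray(x, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest samples
    return float(np.mean(train_y[nearest]))  # averaged difficulty in [0, 1]
```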

**Figure 9.** Results of the compression level selection algorithm based on softmax confidence. The blue line shows the accuracy that can be achieved with static compression. (**a**) Any-Precision. (**b**) SNN.
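The softmax-confidence strategy can be sketched as cascaded inference: run the cheapest configuration first and escalate to a wider or less-quantized one only while the top softmax probability stays below a threshold. The threshold value and the interface (precomputed logits standing in for running the network at each level) are assumptions:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def confident_inference(logits_per_level, threshold=0.9):
    """Return (compression level used, predicted class). Levels are ordered
    cheapest to most expensive; stop at the first confident one."""
    for level, logits in enumerate(logits_per_level):
        probs = softmax(np.asarray(logits, dtype=float))
        if probs.max() >= threshold:
            return level, int(probs.argmax())
    # No level was confident enough: fall back to the full network's answer.
    return len(logits_per_level) - 1, int(probs.argmax())
```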

**Figure 10.** Grouping based on an LDA subspace separating inputs by “correctness” of classification. The LDA label for each sample is obtained as the majority vote over correct vs. incorrect classifications across all the network’s compression levels. (**a**) Any-Precision. (**b**) SNN.
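The majority-vote labeling described in the caption can be sketched as follows; the array layout and the tie-breaking rule (ties count as correct) are assumptions for illustration:

```python
import numpy as np

def majority_vote_labels(correct):
    """`correct[i, j]` is True when sample i was classified correctly at
    compression level j. Each sample's LDA training label is the majority
    vote across all levels: 1 (mostly correct) or 0 (mostly incorrect)."""
    correct = np.asarray(correct, dtype=bool)
    votes = correct.sum(axis=1)                          # correct count per sample
    return (votes * 2 >= correct.shape[1]).astype(int)   # majority (ties -> 1)
```

The resulting binary labels would then be used to fit the LDA projection that separates “easy” from “hard” inputs.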

**Figure 11.** Results of the compression level selection algorithm based on LDA subspace projections. (**a**) Any-Precision. (**b**) SNN.

**Figure 12.** Comparative illustration of the results of the three compression selection algorithms on the UCI HAR dataset, for both Any-Precision ResNet-50 and SNN MobileNet-V2. (**a**) Any-Precision. (**b**) SNN.

**Figure 14.** Experimental results obtained for the user study. (**a**) Accuracy vs. network width. (**b**) Accuracy vs. power consumption.

**Table 1.** MSE and RMSE for the five machine learning models trained to predict the difficulty of each input sample.

| ML Model | MSE | RMSE |
|---|---|---|
| kNN | 0.039 | 0.198 |
| SVM | 0.051 | 0.225 |
| Random forest | 0.051 | 0.226 |
| Linear regression | 0.056 | 0.236 |
| Constant regressor | 0.056 | 0.238 |
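For reference, the two reported metrics relate as RMSE = √MSE, which is why their model rankings in Table 1 agree. A minimal sketch of computing both for a difficulty regressor, on toy data (values illustrative, not the paper's):

```python
import numpy as np

def mse_rmse(y_true, y_pred):
    """Mean squared error and its square root for a set of difficulty
    predictions, matching the two columns reported in Table 1."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    mse = float(np.mean(err ** 2))
    return mse, float(np.sqrt(mse))
```

A constant regressor (always predicting the training-set mean difficulty) serves as the baseline row: any useful model must beat its MSE.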

| Nr. | Feature Type | Variables |
|---|---|---|
| 1–4 | Body acceleration—average | X, Y, Z, Magnitude |
| 5–8 | Body acceleration—standard deviation | X, Y, Z, Magnitude |
| 9–11 | Body acceleration—correlation | XY, XZ, YZ |
| 12–15 | Gravity acceleration—average | X, Y, Z, Magnitude |
| 16–19 | Gravity acceleration—standard deviation | X, Y, Z, Magnitude |
| 20–22 | Gravity acceleration—correlation | XY, XZ, YZ |
| 23–26 | Body acceleration jerk—average | X, Y, Z, Magnitude |
| 27–30 | Body acceleration jerk—standard deviation | X, Y, Z, Magnitude |
| 31–33 | Body acceleration jerk—correlation | XY, XZ, YZ |
| 34–37 | Angular velocity—average | X, Y, Z, Magnitude |
| 38–41 | Angular velocity—standard deviation | X, Y, Z, Magnitude |
| 42–44 | Angular velocity—correlation | XY, XZ, YZ |
| 45–48 | Angular velocity jerk—average | X, Y, Z, Magnitude |
| 49–52 | Angular velocity jerk—standard deviation | X, Y, Z, Magnitude |
| 53–55 | Angular velocity jerk—correlation | XY, XZ, YZ |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Knez, T.; Machidon, O.; Pejović, V. Self-Adaptive Approximate Mobile Deep Learning. *Electronics* **2021**, *10*, 2958.
https://doi.org/10.3390/electronics10232958
