Article

Low-Overhead Reinforcement Learning-Based Power Management Using 2QoSM †

Michael Giardino, Daniel Schwyn, Bonnie Ferri and Aldo Ferri

1 D-INFK, ETH Zürich, 8092 Zurich, Switzerland
2 School of Electrical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
3 School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in MCSoC, Singapore, 20–23 December 2021.
Academic Editor: Andrea Acquaviva
J. Low Power Electron. Appl. 2022, 12(2), 29; https://doi.org/10.3390/jlpea12020029
Received: 1 February 2022 / Revised: 11 March 2022 / Accepted: 19 March 2022 / Published: 19 May 2022
(This article belongs to the Special Issue Hardware for Machine Learning)
As the computational systems of even embedded devices grow ever more powerful, there is a need for more effective and proactive methods of dynamic power management. The work presented in this paper demonstrates the effectiveness of a reinforcement-learning-based dynamic power manager placed in a software framework. This combination of Q-learning for policy determination and software abstractions provides many of the benefits of co-design, namely good performance, responsiveness, and application guidance, together with the flexibility to easily change policies or platforms. The Q-learning-based Quality of Service Manager (2QoSM) is implemented on an autonomous robot built on a complex, powerful embedded single-board computer (SBC) running a high-resolution path-planning algorithm. We find that the 2QoSM reduces power consumption by up to 42% compared to the Linux ondemand governor and by 10.2% compared to a state-of-the-art situation-aware governor. Moreover, the performance as measured by path error is improved by up to 6.1%, all while saving power.
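The abstract names tabular Q-learning as the mechanism behind the 2QoSM's DVFS policy but gives no implementation details. Purely as an illustration of how such a learned governor can be structured, the following is a minimal sketch; the state discretization, reward weights, frequency levels, and all function names are hypothetical and do not reflect the authors' actual 2QoSM code.

```python
# Minimal sketch of tabular Q-learning for DVFS control (illustrative only;
# not the 2QoSM implementation from the paper).
import random
from collections import defaultdict

FREQ_LEVELS = [600, 900, 1200, 1500]   # hypothetical DVFS setpoints (MHz)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def discretize(path_error, power_w):
    """Bucket application error and measured power into a small state space."""
    return (min(int(path_error * 10), 9), min(int(power_w), 9))

def choose_action(state):
    """Epsilon-greedy selection over frequency levels."""
    if random.random() < EPSILON:
        return random.randrange(len(FREQ_LEVELS))
    return max(range(len(FREQ_LEVELS)), key=lambda a: Q[(state, a)])

def reward(path_error, power_w, w_err=1.0, w_pow=0.5):
    """Penalize both application error and power draw (weights illustrative)."""
    return -(w_err * path_error + w_pow * power_w)

def update(state, action, r, next_state):
    """Standard one-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in range(len(FREQ_LEVELS)))
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

# One hypothetical control step: observe, act, observe again, learn.
s = discretize(path_error=0.12, power_w=3.4)
a = choose_action(s)
# ... apply FREQ_LEVELS[a] via the platform's DVFS interface, run one epoch ...
s2 = discretize(path_error=0.10, power_w=2.9)
update(s, a, reward(0.10, 2.9), s2)
```

The appeal of this structure, consistent with the low-overhead claim in the title, is that a small lookup table and a handful of arithmetic operations per decision epoch impose negligible runtime cost on the managed system.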
Keywords: middleware; power management; dynamic power management; reinforcement learning; DVFS; voltage scaling; machine learning