Open Access Article
Sensors 2018, 18(2), 375;

Self-Learning Power Control in Wireless Sensor Networks

Electrical Engineering Department, Eindhoven University of Technology, De Zaale (internal address: Groene Loper 19), 5612 AJ Eindhoven, The Netherlands
Data Science Centre, University of Derby, Lonsdale House, Quaker Way, Derby DE1 3HD, UK
Author to whom correspondence should be addressed.
Received: 14 December 2017 / Revised: 19 January 2018 / Accepted: 21 January 2018 / Published: 27 January 2018
(This article belongs to the Special Issue Smart Communication Protocols and Algorithms for Sensor Networks)

Current trends in interconnecting myriad smart objects to monetize Internet of Things applications have led to high-density communications in wireless sensor networks. This aggravates the already over-congested unlicensed radio bands, calling for new mechanisms to improve spectrum management and energy efficiency, such as transmission power control. Existing protocols are based on simplistic heuristics that often approach interference problems (i.e., packet loss, delay and energy waste) by increasing power, leading to detrimental results. The scope of this work is to investigate how machine learning may be used to bring wireless nodes to the lowest possible transmission power level while respecting the quality requirements of the overall network. Lowering transmission power has benefits in terms of both energy consumption and interference. We propose a transmission power control protocol based on a reinforcement learning process set in a multi-agent system. The agents are independent learners using the same exploration strategy and reward structure, leading to an overall cooperative network. The simulation results show that the system converges to an equilibrium where each node transmits at the minimum power while respecting high packet reception ratio constraints. Consequently, the system benefits from low energy consumption and low packet delay.
Keywords: wireless sensor network; transmission power control; Q-learning; reinforcement learning; game theory; multi-agent; energy efficiency; quality of service
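The idea sketched in the abstract — independent Q-learning agents driving each node to the lowest transmission power that still satisfies a packet reception ratio (PRR) constraint — can be illustrated with a minimal toy example. This is a hedged sketch, not the authors' implementation: the reward shape, the single-state Q-table, the power levels, the PRR target, and the toy link model (`simulated_prr`) are all assumptions made for illustration.

```python
import random

POWER_LEVELS = [0, 1, 2, 3, 4]  # discrete transmit power settings (low -> high)
PRR_TARGET = 0.95               # assumed quality-of-service constraint

class PowerControlAgent:
    """Independent single-state Q-learner over transmit power levels (illustrative)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {p: 0.0 for p in POWER_LEVELS}  # one Q-value per power level
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self):
        # Epsilon-greedy exploration, as in standard Q-learning.
        if random.random() < self.epsilon:
            return random.choice(POWER_LEVELS)
        return max(self.q, key=self.q.get)

    def update(self, power, prr):
        # Assumed reward: satisfy the PRR constraint at the lowest possible power.
        reward = (1.0 if prr >= PRR_TARGET else -1.0) - 0.1 * power
        best_next = max(self.q.values())
        self.q[power] += self.alpha * (reward + self.gamma * best_next - self.q[power])

def simulated_prr(power):
    # Toy link model (assumption): higher power monotonically improves PRR.
    return min(1.0, 0.7 + 0.1 * power)

random.seed(42)
agent = PowerControlAgent()
for _ in range(2000):
    p = agent.choose()
    agent.update(p, simulated_prr(p))

learned = max(agent.q, key=agent.q.get)
print("learned power level:", learned)
```

Under this toy model, level 3 is the lowest power meeting the PRR target, so the agent's Q-values settle on it rather than the more expensive level 4 — mirroring, in miniature, the equilibrium the paper reports for the full multi-agent network.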

Figure 1

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

MDPI and ACS Style

Chincoli, M.; Liotta, A. Self-Learning Power Control in Wireless Sensor Networks. Sensors 2018, 18, 375.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland