Intelligent Sensing in Dynamic Environments Using Markov Decision Process
Abstract: In a network of low-power wireless sensors, it is essential to capture as many environmental events as possible while still preserving the battery life of the sensor node. This paper focuses on a real-time learning algorithm that extends the lifetime of a sensor node so it can sense and transmit environmental events for longer. A common method adopted in ad-hoc sensor networks is to put the sensor nodes to sleep periodically. The purpose of the learning algorithm is to couple the sensor's sleeping behavior to the natural statistics of the environment so that it stays in harmony with environmental changes: the sensor can sleep when the environment is steady and stay awake when it is turbulent. This paper presents theoretical and experimental validation of a reward-based learning algorithm that can be implemented on an embedded sensor. The key contribution of the proposed approach is the design and implementation of a reward function that balances the trade-off between the two mutually contradicting objectives above, and a linear critic function that approximates the discounted sum of future rewards in order to perform policy learning.
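To make the mechanism concrete, the following Python sketch shows one plausible shape of such a scheme: a reward that trades off captured events against energy spent, a linear critic updated by temporal-difference learning to approximate the discounted sum of future rewards, and a simple sleep-fraction policy nudged by the critic's error signal. The feature choices, weights, and learning rates here are illustrative assumptions, not the exact formulation or calibrated values used in the paper.

```python
import numpy as np

# Illustrative sketch only: all constants and feature choices below are
# assumptions made for exposition, not the paper's reported parameters.
GAMMA = 0.9    # discount factor for the sum of future rewards (assumed)
ALPHA = 0.05   # critic learning rate (assumed)
BETA = 0.01    # sleep-policy learning rate (assumed)

def reward(events_captured, energy_spent, w_event=1.0, w_energy=0.5):
    """Trade-off reward: capturing events is rewarded, spending energy is penalized.
    The weights are illustrative, not the paper's values."""
    return w_event * events_captured - w_energy * energy_spent

def features(env_activity, battery_level):
    """Features for the linear critic: bias, recent environmental activity, battery level."""
    return np.array([1.0, env_activity, battery_level])

critic_w = np.zeros(3)   # weights of the linear value-function approximator
sleep_fraction = 0.5     # fraction of each duty cycle spent asleep (the policy)

def step(env_activity, battery_level, next_activity, next_battery,
         events_captured, energy_spent):
    """One learning step: TD(0) update of the linear critic, then a policy nudge."""
    global critic_w, sleep_fraction

    phi = features(env_activity, battery_level)
    phi_next = features(next_activity, next_battery)
    r = reward(events_captured, energy_spent)

    # Temporal-difference error: how much better or worse the outcome was
    # than the critic's current estimate of discounted future reward.
    td_error = r + GAMMA * critic_w @ phi_next - critic_w @ phi
    critic_w += ALPHA * td_error * phi

    # Heuristic policy adjustment: when activity is low, a positive TD error
    # pushes the node toward sleeping more; when activity is high, it pushes
    # the node toward staying awake more.
    sleep_fraction = float(np.clip(
        sleep_fraction + BETA * td_error * (0.5 - env_activity), 0.0, 1.0))
    return td_error
```

In this sketch the critic plays the role described in the abstract, approximating the discounted sum of future rewards, while the sleep-fraction update stands in for policy learning; an embedded implementation would replace the NumPy arrays with fixed-point arithmetic suited to a low-power microcontroller.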
Nanayakkara, T.; Halgamuge, M.N.; Sridhar, P.; Madni, A.M. Intelligent Sensing in Dynamic Environments Using Markov Decision Process. Sensors 2011, 11, 1229-1242.