Abstract:
Optimal use of energy is a primary concern in field-deployable sensor networks. Artificial intelligence algorithms can improve the performance of sensor networks in dynamic environments by minimizing energy consumption without compromising overall performance. However, they have seen only limited use in sensor networks, primarily because of their high computational cost. We describe the use of Markov decision processes (MDPs) for the adaptive control of sensor sampling rates in a sensor network used for human health monitoring. The MDP controller is designed to gather optimal information about the patient’s health while guaranteeing a minimum lifetime for the system. At every control step, the controller varies the frequency at which data is collected according to the criticality of the patient’s health at that time. We present a stochastic model that is used to generate the optimal policy offline. For cases where a model of the observed process is not available a priori, we describe a Q-learning technique that learns the control policy from a pre-existing master controller. Simulation results illustrating the performance of the controller are presented.
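To make the control scheme concrete, the sketch below shows how such a policy could be computed offline with standard value iteration, mapping patient-criticality states to sampling rates. The state names, sampling rates, transition probabilities, and reward weights are all invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical discretization: patient-criticality states and sampling-rate actions.
STATES = ["stable", "elevated", "critical"]   # illustrative health states
ACTIONS = [0.1, 1.0, 10.0]                    # illustrative sampling rates (Hz)

# P[a][s][s']: transition probabilities under each sampling rate, invented for
# illustration. Faster sampling is assumed to track the patient more closely.
P = np.array([
    [[0.80, 0.15, 0.05], [0.20, 0.60, 0.20], [0.05, 0.25, 0.70]],  # 0.1 Hz
    [[0.85, 0.12, 0.03], [0.25, 0.60, 0.15], [0.05, 0.30, 0.65]],  # 1 Hz
    [[0.90, 0.08, 0.02], [0.30, 0.60, 0.10], [0.10, 0.35, 0.55]],  # 10 Hz
])

GAMMA = 0.95  # discount factor (assumed)

def reward(s, a):
    """Information value of sampling state s at rate ACTIONS[a], minus an
    energy cost that grows with the rate (all weights invented)."""
    info_value = [0.1, 0.5, 1.0][s] * np.log1p(ACTIONS[a])
    energy_cost = 0.05 * ACTIONS[a]
    return info_value - energy_cost

def value_iteration(tol=1e-8):
    """Standard value iteration: iterate the Bellman optimality update until
    convergence, then return the optimal values and the greedy policy."""
    V = np.zeros(len(STATES))
    while True:
        Q = np.array([[reward(s, a) + GAMMA * P[a][s] @ V
                       for a in range(len(ACTIONS))]
                      for s in range(len(STATES))])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration()
for s, name in enumerate(STATES):
    print(f"{name:>9}: sample at {ACTIONS[policy[s]]} Hz")
```

Continuing the same sketch, when no model is available a tabular Q-learning update of the kind the abstract alludes to could look as follows; here the (state, action, reward, next-state) samples would be generated by observing the pre-existing master controller, and the learning rate alpha is an assumed illustrative value.

```python
def q_learning_step(Q_table, s, a, r, s_next, alpha=0.1):
    """One tabular Q-learning update on a (n_states, n_actions) table,
    using the same discount factor GAMMA as above (sketch only)."""
    td_target = r + GAMMA * Q_table[s_next].max()
    Q_table[s, a] += alpha * (td_target - Q_table[s, a])
    return Q_table
```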