Value-Independent Information Models

Assessing the value of information for sensors in the context of distributed systems is a challenging problem. Information-driven approaches to active sensor management consider the expected information gain with respect to an explicit inference problem. However, for complex measurement models, Monte Carlo simulation may be required to estimate the relevant quantities, which poses a computational bottleneck for large-scale systems and for planning over long time horizons.

In this project, we examine the conditions under which, for the general exponential family of distributions, the entropy depends only on the number of observations, and we derive bounds on the entropy, before any actual measurements are acquired, for the cases in which these conditions are not met.


People Involved: Giorgos Papachristoudis, John W. Fisher III

In many distributed sensing problems, resource constraints motivate intelligent allocation of sensing assets. Such constraints include limited communication bandwidth and energy budgets imposed by battery limitations. Active approaches seek to manage sensing resources so as to maximize a utility function while respecting constraints on resource expenditures, and several of them use the expected mutual information as the utility function.
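As a point of reference (the notation here is ours and purely illustrative), the expected information gain of a candidate sensing action $\mathcal{A}$ about a latent state $X$ is the mutual information

\[
I\!\left(X; Z_{\mathcal{A}}\right) \;=\; H(X) \;-\; \mathbb{E}_{Z_{\mathcal{A}}}\!\left[\, H\!\left(X \mid Z_{\mathcal{A}}\right) \right],
\]

where $Z_{\mathcal{A}}$ denotes the measurements collected under action $\mathcal{A}$ and $H(\cdot)$ denotes (differential) entropy. The second term, the expected posterior entropy, is the quantity whose estimation drives the computational cost discussed below.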

In simple models (e.g., when observations and parameters follow a linear-Gaussian model tracked with a Kalman filter), the information gain depends solely on the number of observations rather than on their values. For more complex measurement models, however, Monte Carlo simulation may be needed to estimate the necessary quantities, which imposes serious computational constraints when there are large sets of sensors and long time horizons.
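As a sketch of why this holds in the linear-Gaussian case (standard Kalman filter notation, introduced here for illustration), recall that the entropy of a Gaussian is determined by its covariance alone, and that the covariance update

\[
P_{k\mid k} \;=\; \left(I - K_k H_k\right) P_{k\mid k-1},
\qquad
K_k \;=\; P_{k\mid k-1} H_k^{\mathsf{T}} \left( H_k P_{k\mid k-1} H_k^{\mathsf{T}} + R_k \right)^{-1},
\]

involves only the measurement matrix $H_k$, the noise covariance $R_k$, and the predicted covariance $P_{k\mid k-1}$, never the measurement value $z_k$. Since $H\!\left(x_k \mid z_{1:k}\right) = \tfrac{1}{2}\log\det\!\left(2\pi e\, P_{k\mid k}\right)$, the information gain $\tfrac{1}{2}\log\!\left(\det P_{k\mid k-1} / \det P_{k\mid k}\right)$ is available before any measurement is taken.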

In this project, we would first like to find the conditions under which utility functions (such as the information gain) are independent of the values of the observations. Since this condition is expected to be restrictive, the next step is to determine bounds on the information gain even for models in which it does depend on the measurement values. For this purpose, we work within the exponential family of distributions. Since the posterior entropy of the parameter of interest given the data then becomes a function of the data, the final step is to determine concentration bounds for this entropy.
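A minimal sketch of the structure we exploit (standard exponential-family notation, stated here for illustration): for a likelihood $p(x \mid \theta) = h(x)\exp\!\left(\theta^{\mathsf{T}} T(x) - A(\theta)\right)$ with conjugate prior $p(\theta) \propto \exp\!\left(\theta^{\mathsf{T}} \chi_0 - \nu_0 A(\theta)\right)$, the posterior after $n$ observations is

\[
p\!\left(\theta \mid x_{1:n}\right) \;\propto\; \exp\!\left( \theta^{\mathsf{T}} \Big( \chi_0 + \textstyle\sum_{i=1}^{n} T(x_i) \Big) \;-\; \left(\nu_0 + n\right) A(\theta) \right),
\]

so the data enter only through the sufficient statistic $\sum_{i} T(x_i)$. The posterior entropy is value-independent exactly when it reduces to a function of $n$ alone (as in the Gaussian case with known variance); otherwise it is a function of the sufficient statistic, which is what makes concentration bounds on the entropy a natural target.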

These findings will, first, give us a more comprehensive picture of what to expect from a sensor network, modeled in a general way, before any measurements are obtained at all and, second, dramatically reduce the computational complexity.