Sensor Selection

Optimal sensor selection is combinatorially complex and hence intractable for large-scale problems. Under mild conditions, greedy heuristics have been proven to achieve performance within a constant factor of the optimal. Mutual information, a commonly used reward in information theory, can lead to "myopic" selection since it makes no use of the costs assigned to measurements. In addition, the particular choice of the visit walk greatly affects the outcome. In this project, we will examine conditions under which cost-penalized mutual information may achieve guarantees similar to those of mutual information. Lastly, we will explore ways to make informed choices of the visit walk, examine whether locally optimizing exchange algorithms can improve the results of greedy approaches, and work on finding more efficient ways to compute information rewards.

People Involved: Giorgos Papachristoudis, John W. Fisher III


Optimal sensor selection is combinatorially complex and hence intractable for large-scale problems. For this reason, "myopic" approaches have been used that achieve performance within a factor of the optimal and exhibit only polynomial complexity. The two key ingredients in proving these bounds are submodularity and monotonicity. Submodularity is essentially the concept of "diminishing returns": measurements matter more when little is known. Monotonicity means that adding measurements never decreases the reward. Mutual information has been shown to be a submodular function. A greedy algorithm works by selecting the measurement that maximizes the increment of the reward given what has been selected up to that point: \[ g_j = \argmax_{u \in \mathcal{V}\setminus\mathcal{G}_{j-1}} \quad f(\mathcal{G}_{j-1} \cup \{u\}) - f(\mathcal{G}_{j-1}) , \] where $\mathcal{G}_{j-1}$ is the set of items (measurements) selected up to the current point.
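The greedy rule above can be sketched in a few lines. The reward below is a hypothetical weighted-coverage function, chosen only because it is submodular and monotone like the rewards discussed here; the item names and weights are illustrative assumptions.

```python
def greedy_select(reward, ground_set, k):
    """Greedily pick k items, each maximizing the reward increment.

    `reward` plays the role of the oracle f(.) in the text:
    g_j = argmax_{u in V \ G_{j-1}} f(G_{j-1} ∪ {u}) - f(G_{j-1}).
    """
    selected = []
    for _ in range(k):
        remaining = [u for u in ground_set if u not in selected]
        if not remaining:
            break
        best = max(remaining,
                   key=lambda u: reward(selected + [u]) - reward(selected))
        selected.append(best)
    return selected

# Toy submodular reward: weighted coverage of a small universe (assumed data).
weights = {0: 3.0, 1: 2.0, 2: 1.0, 3: 0.5}
items = {"a": {0, 1}, "b": {1, 2}, "c": {2, 3}, "d": {0}}

def coverage(S):
    covered = set().union(*(items[u] for u in S)) if S else set()
    return sum(weights[e] for e in covered)

print(greedy_select(coverage, list(items), 2))  # → ['a', 'c']
```

Note that each step only queries increments with respect to the current set, which is exactly what makes the method polynomial rather than combinatorial.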

Fig. 1: Sequential setting and visit walks

Problem Description

We will deal with two problem settings: the batch setting and the sequential setting. The key difference between the two is that in the batch setting there is just one measurement set and all measurements are available, whereas in the sequential setting there are multiple measurement sets with different constraints, and only a specified set is explored at any point of the algorithm (see Fig. 1). In the sequential setting, we need to define a visit walk: the particular order, satisfying the selection constraints, in which we visit the measurement sets during the greedy process. In Fig. 1, we see a general sequential setting and two potential visit walks (represented with black and red dashed arrows).

In the batch setting, the greedy method achieves at least a $(1-1/e) \approx 63\%$ fraction of the optimal reward. In many cases, though, greedy and optimal may be much closer due to the structure of the problem. In the sequential setting, greedy performs at least half as well as the optimal in the worst case. Remarkably, we can be completely agnostic to all available measurements except those of the measurement set currently under consideration. This reduces the computational cost dramatically while still achieving a reward at least half as high as the optimal. Of course, depending on the particular choice of the visit walk, the greedy reward can differ substantially.


Cost-Penalized Rewards
Fig. 2: Penalized MI can outperform MI in budgeted settings

In settings with a limited budget and costly measurements, the adoption of a monotone reward, such as mutual information, can be quite misleading, since it can lead to extremely greedy choices that ignore cost and thus quickly exhaust the available resources. Using functions that take the cost of measurements into account leads to less greedy choices. One such function is the so-called penalized mutual information: \[ PI(X;Z_{\mathcal{S}}) = I(X;Z_{\mathcal{S}}) - \lambda \sum_{u \in \mathcal{S}} c_{u} . \] This is essentially the mutual information regularized by the cost of a particular subset of measurements. Although submodular, it is not monotone, and hence the known bounds are much looser.
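The effect of the penalty can be illustrated with a minimal sketch. The scalar linear-Gaussian model, the noise variances, the costs, and $\lambda$ below are all assumed toy values, not parameters from the project; the two measurements carry identical information, but one is ten times costlier, so the penalized reward separates them.

```python
import math

# Assumed scalar linear-Gaussian toy: z_u = x + v_u, v_u ~ N(0, r_u), x ~ N(0, p).
p = 4.0                        # prior variance of x
r = {"z1": 0.5, "z2": 0.5}     # measurement noise variances (assumed)
cost = {"z1": 10.0, "z2": 1.0} # z1 is 10x costlier (assumed)
lam = 0.05                     # penalty weight lambda (assumed)

def mutual_info(S):
    # I(X; Z_S) = 0.5 * log(prior variance / posterior variance) in this model
    post = 1.0 / (1.0 / p + sum(1.0 / r[u] for u in S))
    return 0.5 * math.log(p / post)

def penalized_mi(S):
    # PI(X; Z_S) = I(X; Z_S) - lambda * sum of costs
    return mutual_info(S) - lam * sum(cost[u] for u in S)

# Same information content, but the penalty makes z2 the preferred choice.
print(penalized_mi(["z1"]), penalized_mi(["z2"]))
```

With plain mutual information the two candidates tie and the tie-break ignores cost; the penalized reward breaks the tie toward the cheaper measurement, which is the behavior the section describes.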

We consider a toy example of a simple HMM with $T=10$ time points under a limited budget. The hidden variable at each time point is a $2 \times 1$ vector indicating the $x, y$ position of a moving object. At odd time points, measurements of both positions are available, while at even time points only the $x$ position is measured. Lastly, obtaining an $x$-position measurement costs $10$ times more than a $y$-position one. It turns out that choosing penalized mutual information as the reward leads to higher information gain and lower entropy than using mutual information.

Choice of Visit Walk

Choosing the visit walk in sequential settings can lead to dramatic differences in the greedy reward. For example, consider a toy HMM with $T=4$ hidden nodes, where measurements of both positions are available at the first two time points and only the $x$ position at the last two (see Fig. 3). We must select one measurement from the second time point and one from the third. Choosing the walk $\{2,3\}$ versus $\{3,2\}$ makes a huge difference in the final reward (see Fig. 4). Remarkably, not only is the best walk useful but the worst as well, since it serves as the tightest upper bound on the greedy reward.
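A sequential greedy pass can be sketched as follows. The measurement sets, item values, and coverage reward are assumed toy data (not the HMM of Fig. 3), constructed only so that the two walk orders end with different rewards.

```python
def sequential_greedy(reward, walk, measurement_sets):
    """Visit the measurement sets in `walk` order, picking one item
    greedily from each set given everything selected so far."""
    selected = []
    for t in walk:
        best = max(measurement_sets[t],
                   key=lambda u: reward(selected + [u]) - reward(selected))
        selected.append(best)
    return selected, reward(selected)

# Assumed toy coverage reward where the visit order matters.
items = {"a": {1, 2, 3}, "b": {4}, "c": {1, 2, 3, 4}, "d": {5, 6}}
def cover(S):
    return len(set().union(*(items[u] for u in S))) if S else 0

sets = {2: ["a", "b"], 3: ["c", "d"]}
print(sequential_greedy(cover, [2, 3], sets))  # walk {2,3} → (['a', 'd'], 5)
print(sequential_greedy(cover, [3, 2], sets))  # walk {3,2} → (['c', 'a'], 4)
```

Visiting set 2 first leaves a complementary choice available at set 3; visiting set 3 first grabs the individually best item, after which set 2 has nothing new to offer. This is the order-dependence the section refers to.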
Fig. 3: HMM toy example
Fig. 4: Choice of visit walk greatly affects the reward

Local Exchange Algorithms

Greedy selection algorithms are notorious for being myopic. Interchange heuristics have been developed that replace a subset of elements in the greedy set with another subset that achieves a higher reward. Although these algorithms have been proven to perform worse in the worst case than greedy ones, we will explore their performance on specific problem structures and in conjunction with the known greedy methods.
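A minimal 1-exchange refinement can be sketched as below. The weighted-coverage reward and item sets are assumed toy data, deliberately chosen so that greedy gets stuck on a suboptimal pair that a single swap repairs.

```python
def local_exchange(reward, selected, ground_set):
    """1-exchange refinement: swap one chosen item for one outside item
    whenever that strictly improves the reward; repeat until no swap helps."""
    current = list(selected)
    improved = True
    while improved:
        improved = False
        for i in range(len(current)):
            for v in ground_set:
                if v in current:
                    continue
                candidate = current[:i] + [v] + current[i + 1:]
                if reward(candidate) > reward(current):
                    current = candidate
                    improved = True
                    break
            if improved:
                break
    return current

# Assumed toy weighted-coverage reward.
weights = {1: 5.0, 2: 1.0, 3: 1.0, 4: 4.0, 5: 4.0}
items = {"a": {1, 2, 3}, "b": {1, 4}, "c": {4, 5}}

def cover(S):
    covered = set().union(*(items[u] for u in S)) if S else set()
    return sum(weights[e] for e in covered)

# Greedy picks 'b' first (value 9), then 'c' (increment 4): reward 13.
greedy = []
for _ in range(2):
    greedy.append(max((u for u in items if u not in greedy),
                      key=lambda u: cover(greedy + [u]) - cover(greedy)))

refined = local_exchange(cover, greedy, list(items))
print(greedy, cover(greedy))    # ['b', 'c'] 13.0
print(refined, cover(refined))  # ['a', 'c'] 15.0
```

Here the exchange step swaps 'b' for 'a' and recovers the optimal pair, illustrating how a local swap can undo a myopic first choice; on other instances the same heuristic can of course fail to improve anything.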

Computational Cost of Reward Evaluation

In all the above cases, complexity has been measured under the assumption that the value of the incremental reward is given by an oracle. In reality, though, there is considerable computational cost in evaluating the reward. As the number of hidden dimensions and the measurement set size grow, this cost can become prohibitive. In this project, we will exploit the conditional independence between measurements and the crucial fact that the information gain is determined only through the variable that a measurement links to. Using this fact, we may be able to reduce the computational cost dramatically, since we will only need to evaluate marginal variances instead of the full covariance at each step.
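As a sketch of the idea, assume a linear-Gaussian measurement model $z = h\,x_i + v$ with $v \sim \mathcal{N}(0, r)$, where the measurement links to a single hidden variable $x_i$. Under that assumption the incremental information gain depends only on the marginal variance of $x_i$, not on the full joint covariance; the numbers below are illustrative.

```python
import math

def incremental_mi(marginal_var, h, r):
    """Incremental information gain of a scalar measurement z = h * x_i + v,
    v ~ N(0, r), under a linear-Gaussian model: it depends only on the
    marginal variance of the linked variable x_i."""
    return 0.5 * math.log(1.0 + h * h * marginal_var / r)

# Computing the same quantity from the full covariance via
# 0.5 * (log|Sigma| - log|Sigma_post|) costs O(n^3) per evaluation;
# given the marginal variance, this is O(1).
print(incremental_mi(marginal_var=2.0, h=1.0, r=0.5))  # 0.5 * ln(5) ≈ 0.805
```

This is only a sketch of the computational shortcut under the stated assumptions; maintaining the marginal variances themselves as measurements are selected is the remaining bookkeeping.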


We consider a four-story building comprising 24 nodes. The hidden quantities we wish to estimate are displacement and velocity along the three directions. We assume that the earthquake excitation is applied uniformly on the six ground nodes along the $x$ and $y$ directions. Our goal is to efficiently place a specified number of acceleration sensors at each time point. We additionally assume that four stable ground sensors measure displacement. We compare three approaches: one where sensors are placed randomly, one where sensors are placed greedily with excitation incorporated in the model, and one where sensors are placed greedily assuming no excitation. The results are depicted in Fig. 5, where we see that greedy outperforms random selection. Magenta lines indicate the building state had we been able to place sensors on all nodes along all three directions.
Sensor selection through time
Fig. 5: Greedy outperforms random selection


Refereed Workshop Papers
[1] G. Papachristoudis, J. W. Fisher III, "Theoretical Guarantees on Penalized Information Gathering," in Statistical Signal Processing Workshop (SSP), 2012, pp. 301–304.