|Aerial Reconstructions via Probabilistic Data Fusion|
We propose an integrated probabilistic model for the multi-modal fusion of aerial imagery, LiDAR data, and (optional) GPS measurements. The model enables analysis and dense reconstruction, in terms of both geometry and appearance, of large 3D scenes. An advantage of the approach is that it explicitly models uncertainty and accommodates missing data. Compared with purely image-based methods, it yields dense reconstructions of complex urban scenes from fewer observations. Moreover, the model allows one to estimate absolute scale and orientation, and to reason about other aspects of the scene, e.g., the detection of moving objects. As formulated, the model lends itself to massively parallel computation; we exploit this in an efficient inference scheme that utilizes both general-purpose and domain-specific hardware components. We demonstrate results on large-scale reconstruction of urban terrain from LiDAR and aerial photography data.
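To give intuition for why explicit uncertainty modeling helps with heterogeneous sensors and missing data, the sketch below fuses independent Gaussian measurements of a single quantity by inverse-variance weighting. This is a minimal illustrative example only, not the paper's actual generative model: the function name and the toy measurement values are assumptions made for the sketch.

```python
import numpy as np

def fuse_gaussian(means, variances):
    """Fuse independent Gaussian measurements of one quantity by
    inverse-variance weighting; missing measurements (NaN) are skipped.
    Illustrative sketch only -- not the paper's fusion model."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    valid = ~np.isnan(means)                # drop missing observations
    w = 1.0 / variances[valid]              # precision (inverse-variance) weights
    fused_var = 1.0 / w.sum()               # uncertainty shrinks as data accumulate
    fused_mean = fused_var * (w * means[valid]).sum()
    return fused_mean, fused_var

# Toy example: a precise LiDAR range, a noisier image-based depth, and a
# missing third measurement (NaN), which the fusion simply ignores.
m, v = fuse_gaussian([10.2, 9.5, np.nan], [0.01, 0.25, 1.0])
```

The fused estimate is pulled toward the more certain (LiDAR) measurement, and its variance is smaller than either input's; a missing observation degrades the estimate gracefully rather than breaking it.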
People Involved: Randi Cabezas, Oren Freifeld, Guy Rosman, John W. Fisher III
In the following video, we highlight the main advantages of the proposed approach. We also showcase inference procedures and how the model can be used to answer queries beyond reconstructions.
The paper, supplementary materials, and code can be found here.
Results can be visualized here.
|||R. Cabezas, O. Freifeld, G. Rosman, and J. W. Fisher III, "Aerial Reconstructions via Probabilistic Data Fusion", in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.|