In this paper, we develop a generative probabilistic model for temporally consistent superpixels in video sequences. Unlike supervoxel methods, ours assigns the same temporal superpixel to the same part of an underlying object across frames. Our method explicitly models the flow between frames with a bilateral Gaussian process and uses this information to propagate superpixels in an online fashion. We present four new metrics to measure the performance of a temporal superpixel representation and find that our method outperforms supervoxel methods.
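To give intuition for the online propagation step, the following is a minimal sketch (not the authors' TSP implementation) of the core idea of flow-based label propagation: a superpixel label map for frame t is warped by a dense flow field to initialize frame t+1. The function name, the nearest-neighbor warping, and the toy flow field are all illustrative assumptions.

```python
import numpy as np

def propagate_labels(labels, flow):
    """Warp a superpixel label map to the next frame with a dense flow field.

    labels: (H, W) integer superpixel labels for frame t.
    flow:   (H, W, 2) per-pixel (dy, dx) displacement from frame t to t+1.
    Returns an (H, W) label map for frame t+1; unmapped pixels are -1.
    NOTE: illustrative nearest-neighbor warp, not the TSP inference procedure.
    """
    H, W = labels.shape
    out = -np.ones((H, W), dtype=labels.dtype)
    ys, xs = np.mgrid[0:H, 0:W]
    ny = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    nx = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    out[ny, nx] = labels[ys, xs]  # last writer wins where targets collide
    return out

# toy example: two superpixels, uniform flow of one pixel to the right
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1
flow = np.zeros((4, 4, 2))
flow[..., 1] = 1.0
warped = propagate_labels(labels, flow)
```

In the full model, this kind of forward propagation would only seed the next frame; the generative model then refines superpixel boundaries and handles appearing or disappearing regions.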
People Involved: Jason Chang, Donglai Wei, John W. Fisher III
Code for our temporal superpixel (TSP) algorithm can be found here.
In the following video, we show some results obtained using our TSP representation.
Refereed Conference Papers
J. Chang, D. Wei, and J. W. Fisher III, "A Video Representation Using Temporal Superpixels," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.