Unsupervised Video Segmentation Algorithms Based On Flexibly Regularized Mixture Models

with C. Launay and R. Coen-Cagli.

Proceedings version · Pre-print version

    Claire Launay applied our mixture-model-based segmentation algorithm to videos by propagating segmentation information across frames!

    1. Launay, C., Vacher, J. & Coen-Cagli, R. Unsupervised Video Segmentation Algorithms Based On Flexibly Regularized Mixture Models. in 2022 IEEE International Conference on Image Processing (ICIP) 4073–4077 (IEEE, 2022).


    We propose a family of probabilistic segmentation algorithms for videos that rely on a generative model capturing static and dynamic natural image statistics. Our framework adopts flexibly regularized mixture models (FlexMM) [1], an efficient method to combine mixture distributions across different data sources. FlexMMs of Student-t distributions successfully segment static natural images through uncertainty-based information sharing between hidden layers of CNNs. We further extend this approach to videos and exploit FlexMM to propagate segment labels across space and time. We show that temporal propagation improves the temporal consistency of segmentation, qualitatively reproducing a key aspect of human perceptual grouping. Moreover, Student-t distributions can capture the statistics of the optical flow of natural movies, which represents apparent motion in the video. Integrating these motion cues in our temporal FlexMM further enhances the segmentation of each frame of natural movies. Our probabilistic dynamic segmentation algorithms thus provide a new framework to study uncertainty in human dynamic perceptual segmentation.
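    To make the core ingredient concrete, here is a minimal sketch (not the paper's implementation, and without the FlexMM regularization that couples mixtures across data sources or frames) of EM for a 1-D mixture of Student-t components with fixed degrees of freedom `nu`. The function names and the choice of a univariate model are illustrative; the heavy tails of the Student-t are what make such mixtures robust for natural image features.

    ```python
    import numpy as np
    from math import lgamma, log, pi

    def student_t_logpdf(x, mu, sigma2, nu):
        # log density of a univariate Student-t with location mu,
        # squared scale sigma2, and degrees of freedom nu
        d2 = (x - mu) ** 2 / sigma2
        return (lgamma((nu + 1) / 2) - lgamma(nu / 2)
                - 0.5 * (log(pi * nu) + np.log(sigma2))
                - (nu + 1) / 2 * np.log1p(d2 / nu))

    def fit_student_t_mixture(x, K=2, nu=4.0, n_iter=100):
        """EM for a 1-D Student-t mixture with fixed dof (illustrative sketch)."""
        mu = np.quantile(x, np.linspace(0.1, 0.9, K))  # spread initial means
        sigma2 = np.full(K, np.var(x))
        pi_k = np.full(K, 1.0 / K)
        for _ in range(n_iter):
            # E-step: responsibilities, i.e. posterior over segment labels
            log_r = np.stack(
                [np.log(pi_k[k]) + student_t_logpdf(x, mu[k], sigma2[k], nu)
                 for k in range(K)], axis=1)
            log_r -= log_r.max(axis=1, keepdims=True)
            r = np.exp(log_r)
            r /= r.sum(axis=1, keepdims=True)
            # latent precision weights from the Gaussian-scale-mixture
            # representation of the Student-t (downweights outliers)
            u = (nu + 1.0) / (nu + (x[:, None] - mu[None, :]) ** 2
                             / sigma2[None, :])
            # M-step: responsibility- and precision-weighted updates
            ru = r * u
            mu = (ru * x[:, None]).sum(axis=0) / ru.sum(axis=0)
            sigma2 = ((ru * (x[:, None] - mu[None, :]) ** 2).sum(axis=0)
                      / r.sum(axis=0))
            pi_k = r.mean(axis=0)
        return mu, sigma2, pi_k, r
    ```

    In the paper's setting, the responsibilities `r` of one frame (or one CNN layer) would additionally regularize those of the next, which is what propagates labels across space and time.
    
    
    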
