Gengshan Yang¹, Deva Ramanan¹,²

¹Carnegie Mellon University  ²Argo AI
Appearance-based detectors achieve remarkable performance on common scenes, benefiting from high-capacity models and massive annotated data, but tend to fail in scenarios that lack training data. Geometric motion segmentation algorithms, in contrast, generalize to novel scenes, but have yet to match the performance of appearance-based methods, due to noisy motion estimates and degenerate motion configurations. To combine the best of both worlds, we propose a modular network whose architecture is motivated by a geometric analysis of what independent object motions can be recovered from an ego-motion field. It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations. Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel, and the inferred rigid motions lead to a significant improvement in depth and scene flow estimation.
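To make the last step concrete: once a rigid segment has been matched across the two frames and its pixels back-projected to 3D, its motion reduces to a single SE(3) fit. Below is a minimal sketch of such a fit using the classic Kabsch (orthogonal Procrustes) least-squares solution; the function name and inputs are illustrative assumptions, not the solver used in the paper.

```python
import numpy as np

def fit_rigid_transform(X, Y):
    """Least-squares SE(3) fit (Kabsch): find R, t with Y ~= X @ R.T + t.

    X, Y: (N, 3) corresponding 3D points of one rigid segment,
    e.g. back-projected pixels from frames t and t+1 (hypothetical inputs).
    """
    cX, cY = X.mean(axis=0), Y.mean(axis=0)    # segment centroids
    H = (X - cX).T @ (Y - cY)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cY - R @ cX                      # rotation, translation

# Toy check: a known rotation + translation is recovered exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Y = X @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = fit_rigid_transform(X, Y)
assert np.allclose(R, R_true) and np.allclose(t, [0.5, -0.2, 1.0])
```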
```
@inproceedings{yang2021rigidmask,
  title={Learning to Segment Rigid Motions from Two Frames},
  author={Yang, Gengshan and Ramanan, Deva},
  booktitle={CVPR},
  year={2021}
}
```
This work was supported by the CMU Argo AI Center for Autonomous Vehicle Research. We thank Rui Zhu for providing the code for single-image camera intrinsics estimation. We thank Jason Zhang, Tarasha Khurana, Jessica Lee, and many others for their useful feedback.