3D Shape Completion from Sparse Point Clouds Using Deep Learning
Abstract
We tackle the problem of generating dense shape representations from sparse and partial point clouds. We achieve this with a data-driven approach that learns to complete incoming sets of 3D points in a fully supervised manner.
To this end, we first prepare a suitable dataset containing partial scans, each one paired with a complete ground-truth representation. We base this on the widely used ModelNet40 dataset, which features high-quality CAD models of 40 different object categories.
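A partial/complete training pair of the kind described above can be simulated by cutting away part of a dense point cloud. The sketch below is an illustrative recipe, not the paper's exact preprocessing: it removes all points on one side of a random plane through the cloud's centroid (the sphere stand-in and `keep_ratio` parameter are assumptions for this example).

```python
import numpy as np

def make_partial_scan(points, keep_ratio=0.5, rng=None):
    """Simulate a partial scan by discarding points on one side of a
    random plane through the centroid. `points` is an (N, 3) array.
    Illustrative only; not the paper's actual preprocessing."""
    rng = np.random.default_rng() if rng is None else rng
    normal = rng.normal(size=3)
    normal /= np.linalg.norm(normal)          # random unit plane normal
    d = (points - points.mean(axis=0)) @ normal  # signed distance to plane
    threshold = np.quantile(d, keep_ratio)    # keep roughly keep_ratio of points
    return points[d <= threshold]

# Dense ground truth: points on a unit sphere (stand-in for a CAD model).
rng = np.random.default_rng(0)
dense = rng.normal(size=(2048, 3))
dense /= np.linalg.norm(dense, axis=1, keepdims=True)
partial = make_partial_scan(dense, keep_ratio=0.5, rng=rng)
```

The network is then trained to map `partial` back to `dense`.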
Our deep learning architecture operates directly on point clouds, with no need to convert them into another representation. This sets our method apart from state-of-the-art approaches that require 3D data in a regular grid format such as voxel grids.
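A network that consumes raw point sets must be invariant to the ordering of its input points. A common way to achieve this (the paper's exact architecture is not reproduced here; the layer sizes and weights below are illustrative assumptions) is a shared per-point MLP followed by a symmetric max-pooling, in the style of PointNet:

```python
import numpy as np

def point_encoder(points, w1, w2):
    """Minimal permutation-invariant encoder: a shared per-point MLP
    followed by a symmetric max-pool over the point dimension.
    Illustrative sketch only, not the paper's architecture."""
    h = np.maximum(points @ w1, 0.0)  # shared layer 1, ReLU
    h = np.maximum(h @ w2, 0.0)       # shared layer 2, ReLU
    return h.max(axis=0)              # symmetric pooling -> global feature

rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 64))
w2 = rng.normal(size=(64, 128))
cloud = rng.normal(size=(256, 3))

feat = point_encoder(cloud, w1, w2)
shuffled = cloud[rng.permutation(len(cloud))]
feat_shuffled = point_encoder(shuffled, w1, w2)
```

Because max-pooling is symmetric, `feat` and `feat_shuffled` are identical even though the input order differs; a decoder can then map this global feature to a dense output point set.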
The results show that our method outperforms these more involved approaches while being easier to train, enabling efficient prediction of dense shapes from sparse and partial point clouds.