PU-GAN: a Point Cloud Upsampling Adversarial Network

Ruihui Li1
Xianzhi Li1,3
Chi-Wing Fu1,3
Daniel Cohen-Or2
Pheng-Ann Heng1,3
1The Chinese University of Hong Kong
2Tel Aviv University
3Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China

Code [GitHub]
ICCV 2019 [Paper]




Abstract

Point clouds acquired from range scans are often sparse, noisy, and non-uniform. This paper presents a new point cloud upsampling network called PU-GAN, which is formulated based on a generative adversarial network (GAN), to learn a rich variety of point distributions from the latent space and upsample points over patches on object surfaces. To realize a working GAN framework, we construct an up-down-up expansion unit in the generator for upsampling point features with error feedback and self-correction, and formulate a self-attention unit to enhance the feature integration. Further, we design a compound loss with adversarial, uniform, and reconstruction terms, to encourage the discriminator to learn more latent patterns and to enhance the uniformity of the output point distribution. Qualitative and quantitative evaluations demonstrate the quality of our results over the state-of-the-art methods in terms of distribution uniformity, proximity-to-surface, and 3D reconstruction quality.
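To make the expansion idea concrete, below is a minimal PyTorch sketch of an up-down-up unit: point features are expanded, contracted back to the input size, and the residual (the "error") is expanded again to correct the first expansion. The specific layer choices here (1x1 convolutions, duplication-based expansion, mean-based contraction) are illustrative assumptions, not the official implementation; please refer to the GitHub code for the exact architecture.

import torch
import torch.nn as nn

def expand(feat, ratio):
    # Duplicate each per-point feature `ratio` times: (B, C, N) -> (B, C, r*N).
    return feat.repeat_interleave(ratio, dim=2)

class UpDownUp(nn.Module):
    # Sketch of an up-down-up expansion unit with error feedback (assumed ops).
    def __init__(self, channels, ratio):
        super().__init__()
        self.ratio = ratio
        self.up1 = nn.Conv1d(channels, channels, 1)   # refine first expansion
        self.down = nn.Conv1d(channels, channels, 1)  # refine contraction
        self.up2 = nn.Conv1d(channels, channels, 1)   # refine residual expansion

    def forward(self, f):                             # f: (B, C, N)
        e1 = self.up1(expand(f, self.ratio))          # first "up": (B, C, r*N)
        b, c, _ = e1.shape
        # "down": average each group of `ratio` expanded features back to N points
        f2 = self.down(e1.reshape(b, c, -1, self.ratio).mean(-1))
        delta = f2 - f                                # error feedback
        e2 = self.up2(expand(delta, self.ratio))      # second "up": (B, C, r*N)
        return e1 + e2                                # self-corrected expansion

x = torch.randn(2, 64, 256)                           # 256 point features, 64 channels
print(UpDownUp(64, 4)(x).shape)                       # -> torch.Size([2, 64, 1024])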


Overview




Paper and Supplementary Material

Ruihui Li, Xianzhi Li, Chi-Wing Fu,
Daniel Cohen-Or, Pheng-Ann Heng.

PU-GAN: a Point Cloud Upsampling Adversarial Network.
In ICCV, 2019.
[arxiv] [paper] [supp]



Surface reconstruction results

We show point set upsampling and surface reconstruction results for various models below. Comparing the results produced by (f) our method and by (c-e) others, against (b) the ground-truth points uniformly sampled on the original testing models, we can see that the other methods tend to produce noisier and less uniform point sets, thus leading to more artifacts in the reconstructed surfaces. See, particularly, the blown-up views, which show that PU-GAN produces more fine-grained details in the upsampled results, e.g., the elephant's nose (top) and the tiger's tail (bottom).




Results on real scans

We also apply PU-GAN to LiDAR point clouds (downloaded from the KITTI dataset). From the first row, we can see the sparsity and non-uniformity of the inputs. Even so, PU-GAN can fill some of the holes and output more uniform points in the results; please see the supplemental material for more results.




Other Experiments

We also show the results of using PU-GAN to upsample point sets with increasing Gaussian noise levels, indicating the robustness of PU-GAN to noise and sparsity. Moreover, the results of upsampling input point sets of different sizes demonstrate that our method is stable even for inputs with only 512 points.
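As a rough illustration of how such test inputs can be prepared, here is a small NumPy sketch that perturbs a point set with increasing Gaussian noise and subsamples it to a fixed size. The noise scales, the bounding-box-based normalization, and the random subsampling are our assumptions, not the exact protocol used in the paper.

import numpy as np

def make_test_inputs(points, noise_levels=(0.0, 0.005, 0.01, 0.02), size=512, seed=0):
    # points: (N, 3) array. Returns one perturbed, subsampled set per noise level.
    rng = np.random.default_rng(seed)
    # Scale the noise by the point cloud's bounding-box diagonal (an assumption).
    diag = np.linalg.norm(points.max(0) - points.min(0))
    inputs = []
    for level in noise_levels:
        noisy = points + rng.normal(0.0, level * diag, points.shape)
        idx = rng.choice(len(noisy), size=size, replace=False)
        inputs.append(noisy[idx])
    return inputs

# e.g., inputs = make_test_inputs(np.random.rand(2048, 3))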

Upsampling point sets of varying noise levels


Upsampling point sets of varying sizes



Citation

If PU-GAN is useful for your research, please consider citing:

@inproceedings{li2019pugan,
  title = {{PU-GAN}: a Point Cloud Upsampling Adversarial Network},
  author = {Li, Ruihui and Li, Xianzhi and Fu, Chi-Wing and Cohen-Or, Daniel and Heng, Pheng-Ann},
  booktitle = {{IEEE} International Conference on Computer Vision ({ICCV})},
  year = {2019},
}




Acknowledgments

We thank the anonymous reviewers for their valuable comments. We also thank Xiao Tang for collecting the dataset and Hao Xu for rendering the vivid figures. The work is supported by the 973 Program (Proj. No. 2015CB351706), the National Natural Science Foundation of China (Proj. No. U1613219), the Research Grants Council of the Hong Kong Special Administrative Region (No. CUHK 14203416 & 14201717), and the Israel Science Foundation (grants 2366/16 and 2472/7).