PREIM3D: 3D Consistent Precise Image Attribute Editing from a Single Image

  • Jianhui Li
  • Jianmin Li
  • Haoji Zhang
  • Shilong Liu
  • Zhengyi Wang
  • Zihao Xiao
  • Kaiwen Zheng
  • Jun Zhu
  • Tsinghua University
  • Accepted at CVPR 2023

Abstract

We study the 3D-aware image attribute editing problem, which has wide practical applications. Recent methods solve it either by training a shared encoder that maps images into a 3D generator's latent space or by per-image latent-code optimization, and then edit images in that latent space. Despite promising results near the input view, these methods still suffer from 3D inconsistency in images produced at large camera poses and from imprecise attribute editing, such as unintentionally altering unspecified attributes. For more efficient image inversion, we train a single shared encoder for all images. To alleviate 3D inconsistency at large camera poses, we propose two novel techniques, an alternating training scheme and a multi-view identity loss, that maintain 3D consistency and preserve subject identity. We attribute imprecise editing to the gap between the latent space of real images and that of generated images. By comparing the latent space and the inversion manifold of GAN models, we demonstrate that editing in the inversion manifold achieves better results in both quantitative and qualitative evaluations. Extensive experiments show that our method produces more 3D-consistent images and achieves more precise attribute editing than previous work. Source code and pretrained models can be found on our project page: https://mybabyyh.github.io/Preim3D/
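The multi-view identity loss mentioned above can be illustrated with a minimal sketch: render the subject at several sampled camera poses, embed each render with a pretrained face-recognition network, and penalize the average cosine distance to the input image's embedding. This is an illustrative reconstruction, not the paper's implementation; `id_embed_fn` stands in for a pretrained identity encoder (e.g. an ArcFace-style network), and all names here are hypothetical.

```python
import numpy as np

def multi_view_identity_loss(id_embed_fn, multi_view_renders, input_image):
    """Hedged sketch of a multi-view identity loss.

    Averages (1 - cosine similarity) between the identity embedding of
    the input image and the embeddings of renders at sampled camera
    poses. `id_embed_fn` is a placeholder for a pretrained identity
    network; in practice it would map an image tensor to a feature
    vector.
    """
    ref = np.asarray(id_embed_fn(input_image), dtype=float).ravel()
    ref = ref / np.linalg.norm(ref)  # unit-normalize the reference embedding

    losses = []
    for render in multi_view_renders:
        emb = np.asarray(id_embed_fn(render), dtype=float).ravel()
        emb = emb / np.linalg.norm(emb)
        # cosine distance: 0 when identity is perfectly preserved
        losses.append(1.0 - float(ref @ emb))
    return sum(losses) / len(losses)
```

With a perfect identity encoder and identical renders the loss is zero; identity drift at large camera poses increases it, which is the behavior the training objective is meant to penalize.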

Approach



Results



Real-time Editing



BibTeX

@inproceedings{li2023PREIM3D,
  title={PREIM3D: 3D Consistent Precise Image Attribute Editing from a Single Image},
  author={Li, Jianhui and Li, Jianmin and Zhang, Haoji and Liu, Shilong and Wang, Zhengyi and Xiao, Zihao and Zheng, Kaiwen and Zhu, Jun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}