We propose GaussCtrl, a text-driven method for editing a 3D scene reconstructed with 3D Gaussian Splatting (3DGS). Our method first renders a collection of images from the 3DGS model and edits them with a pre-trained 2D diffusion model (ControlNet) according to the input prompt; the edited images are then used to optimise the 3D model. Our key contribution is multi-view consistent editing, which allows all images to be edited together rather than iteratively editing one image while updating the 3D model, as in previous works. This leads to faster editing as well as higher visual quality. It is achieved by two components: (a) depth-conditioned editing, which enforces geometric consistency across multi-view images by leveraging naturally consistent depth maps, and (b) attention-based latent code alignment, which unifies the appearance of the edited images by conditioning their editing on several reference views through self- and cross-view attention between the images' latent representations. Experiments demonstrate that our method achieves faster editing and better visual results than previous state-of-the-art methods.
Our method enables multi-view consistent 3D editing conditioned on depth. In a nutshell, given a 3DGS model to be edited, the method proceeds in three steps:
Given a pre-trained 3D model (3DGS/NeRF), RGB and depth images are first rendered. Latent codes are then obtained through DDIM inversion, conditioned on the depth maps and a text prompt describing the original scene.
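For concreteness, here is a minimal sketch of the DDIM inversion loop. It assumes a hypothetical `eps_model` callable that wraps the depth-conditioned ControlNet noise predictor (with the descriptive prompt already baked in) and a standard DDPM alpha-bar schedule; it illustrates the inversion update rather than our exact implementation.

```python
import torch

@torch.no_grad()
def ddim_invert(x0_latent, eps_model, alphas_cumprod, timesteps):
    """Deterministically map a clean latent back to noise (DDIM inversion).

    x0_latent:      encoded latent of a rendered RGB view, shape (B, C, H, W)
    eps_model:      hypothetical callable eps_model(x, t) -> predicted noise;
                    here it stands in for the depth-conditioned ControlNet UNet
    alphas_cumprod: 1-D tensor of cumulative alpha-bar values
    timesteps:      increasing timesteps, e.g. [0, 20, 40, ..., 980]
    """
    x = x0_latent
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        a_cur, a_next = alphas_cumprod[t_cur], alphas_cumprod[t_next]
        eps = eps_model(x, t_cur)
        # Predict the clean latent implied by the current noise estimate,
        # then re-noise it to the next (higher) noise level.
        x0_pred = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x  # inverted latent code, the starting point for editing
```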
All views are then edited by the depth-conditioned ControlNet, starting from their inverted latent codes and guided by the editing prompt. By doing so, multi-view consistency during editing is greatly improved, because the original images have naturally consistent colour and geometry.
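As a rough illustration, the snippet below edits rendered views with the publicly available depth ControlNet in diffusers. The checkpoint names and the plain text-to-image call are assumptions made for the sketch (our method instead denoises from the inverted latents), not our exact pipeline.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Depth-conditioned ControlNet; checkpoint names are illustrative choices.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# depth_maps: list of PIL depth images rendered from the 3DGS model.
edited_views = [
    pipe(
        "a bronze statue of a lion",  # example editing prompt
        image=depth,                  # depth map conditions the geometry
        num_inference_steps=20,
    ).images[0]
    for depth in depth_maps
]
```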
During the generation process, we randomly select a few reference views and align all other views to them via our Attention-based Latent Code Alignment module, encouraging multi-view consistency during generation.
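A minimal sketch of the cross-view attention idea follows: each non-reference view attends not only to its own keys and values but also to those of the reference views, pulling its latents toward a shared appearance. The function below is a simplified stand-in for the attention layers inside the diffusion UNet, with all shapes and names assumed for illustration.

```python
import math
import torch

def aligned_attention(q, k, v, k_refs, v_refs):
    """Self- and cross-view attention over a view's own tokens plus
    tokens gathered from the reference views.

    q, k, v:        (B, N, D) query/key/value tokens of the current view
    k_refs, v_refs: (B, N_ref, D) keys/values from the reference views
    """
    # Extend keys and values with the reference views' tokens so the
    # current view's latents are aligned to the references' appearance.
    k_all = torch.cat([k, k_refs], dim=1)
    v_all = torch.cat([v, v_refs], dim=1)
    attn = torch.softmax(
        q @ k_all.transpose(-2, -1) / math.sqrt(q.shape[-1]), dim=-1
    )
    return attn @ v_all
```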
Here we include more examples in addition to those in our paper.
If you use this work or find it helpful, please consider citing: (bibtex)
Special thanks to Zirui Wang for helpful discussions.