GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing

University of Oxford · Mohamed bin Zayed University of Artificial Intelligence

Paper | Code (to be released) | Data (to be released)

Abstract

We propose GaussCtrl, a text-driven method for editing a 3D scene reconstructed with 3D Gaussian Splatting (3DGS). Our method first renders a collection of images from the 3DGS model and edits them with a pre-trained 2D diffusion model (ControlNet) according to the input prompt; the edited images are then used to optimise the 3D model. Our key contribution is multi-view consistent editing, which enables editing all images together instead of iteratively editing one image while updating the 3D model as in previous works, leading to faster editing as well as higher visual quality. This is achieved by two components: (a) depth-conditioned editing, which enforces geometric consistency across multi-view images by leveraging naturally consistent depth maps; and (b) attention-based latent code alignment, which unifies the appearance of the edited images by conditioning their editing on several reference views through self- and cross-view attention between the images' latent representations. Experiments demonstrate that our method achieves faster editing and better visual results than previous state-of-the-art methods.

Method

Our method enables multi-view consistent 3D editing conditioned on depth. In a nutshell, given a to-be-edited 3D Gaussian Splatting (3DGS) model, the method consists of four steps (a pseudocode sketch follows the list):

  1. Render RGB and depth images from the model.
  2. Invert all the rendered images to latent codes via DDIM inversion with ControlNet, conditioned on depth.
  3. Align all the views to several randomly selected reference views during denoising.
  4. Continue training the given 3DGS model on the edited images to complete the editing.
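The sketch below shows how the four steps fit together. It is a minimal illustration, not our released implementation: the helpers render_views, ddim_invert, align_to_references, and finetune_3dgs are hypothetical placeholders.

import random

def gaussctrl_edit(gs_model, source_prompt, edit_prompt, num_views=40, num_refs=4):
    # 1. Render RGB and depth images from the 3DGS model.
    rgbs, depths = render_views(gs_model, num_views)
    # 2. Invert every rendered image to a latent code, conditioning the
    #    depth-aware ControlNet on the rendered depth and a prompt that
    #    describes the original (unedited) scene.
    latents = [ddim_invert(rgb, d, source_prompt) for rgb, d in zip(rgbs, depths)]
    # 3. Randomly select a few reference views and align all other views
    #    to them while denoising with the editing prompt.
    ref_ids = random.sample(range(num_views), num_refs)
    edited_images = align_to_references(latents, ref_ids, depths, edit_prompt)
    # 4. Continue training the 3DGS model on the edited images.
    return finetune_3dgs(gs_model, edited_images)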

Figure: Overview of our method.
We also provide an animated illustration.

Depth-conditioned Image Editing

Given a pre-trained 3D model (3DGS/NeRF), we first render RGB and depth images from it. Latent codes are then obtained through DDIM inversion, conditioned on the rendered depth and a text prompt describing the scene.
Because the rendered images and depth maps come from the same underlying 3D model, their colour and geometry are naturally consistent, which greatly improves multi-view consistency during editing.
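As a concrete illustration, below is a minimal sketch of this inversion step using Hugging Face diffusers with the public ControlNet v1.1 depth model. The model choices, preprocessing, and hyperparameters are assumptions made for this sketch (and encode_prompt varies slightly across diffusers versions); they are not necessarily our exact configuration.

import torch
from diffusers import (ControlNetModel, DDIMInverseScheduler,
                       StableDiffusionControlNetPipeline)

device = "cuda"
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to(device)
inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

@torch.no_grad()
def ddim_invert(rgb, depth, prompt, num_steps=50):
    """Invert one rendered RGB view to a latent code, conditioned on its
    rendered depth map (as a 3-channel image) and a text prompt."""
    # Encode the RGB image into the VAE latent space; use the mean of the
    # latent distribution so the inversion is deterministic.
    image = pipe.image_processor.preprocess(rgb).to(device, torch.float16)
    latents = pipe.vae.encode(image).latent_dist.mean
    latents = latents * pipe.vae.config.scaling_factor
    depth_cond = pipe.control_image_processor.preprocess(depth).to(device, torch.float16)
    prompt_embeds, _ = pipe.encode_prompt(prompt, device, 1, False)
    # Run DDIM in reverse (t: 0 -> T): each step adds back the noise the
    # sampler would have removed, with ControlNet injecting depth cues.
    inverse_scheduler.set_timesteps(num_steps, device=device)
    for t in inverse_scheduler.timesteps:
        down_res, mid_res = pipe.controlnet(
            latents, t, encoder_hidden_states=prompt_embeds,
            controlnet_cond=depth_cond, return_dict=False)
        noise_pred = pipe.unet(
            latents, t, encoder_hidden_states=prompt_embeds,
            down_block_additional_residuals=down_res,
            mid_block_additional_residual=mid_res).sample
        latents = inverse_scheduler.step(noise_pred, t, latents).prev_sample
    return latents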

Attention-based Latent Code Alignment

During the generation (denoising) process, we randomly select a few reference views and align all other views to them with our Attention-based Latent Code Alignment module, which applies self- and cross-view attention between the images' latent representations to encourage multi-view consistency.
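Conceptually, the alignment extends each view's self-attention so that the keys and values also include tokens from the reference views, letting every view attend to the same shared appearance. The single-head layer below is an illustrative sketch of this idea; the shapes, projections, and single-head simplification are assumptions, not our exact module.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)

    def forward(self, x, refs):
        # x:    (B, N, C) latent tokens of the views being edited
        # refs: (R, N, C) latent tokens of the reference views
        B, _, C = x.shape
        ref_tokens = refs.reshape(1, -1, C).expand(B, -1, -1)
        # Keys/values span each view's own tokens plus the reference
        # tokens: self-attention and cross-view attention in one pass.
        kv = torch.cat([x, ref_tokens], dim=1)
        q, k, v = self.to_q(x), self.to_k(kv), self.to_v(kv)
        return F.scaled_dot_product_attention(q, k, v)

# Example: align 8 views to 2 reference views (1024 tokens, width 320).
attn = CrossViewAttention(dim=320)
out = attn(torch.randn(8, 1024, 320), torch.randn(2, 1024, 320))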

Results

Here we include additional examples beyond those in our paper.

Citation

If you use this work or find it helpful, please consider citing:

@article{gaussctrl2024,
  author  = {Wu, Jing and Bian, Jia-Wang and Li, Xinghui and Wang, Guangrun and Reid, Ian and Torr, Philip and Prisacariu, Victor},
  title   = {{GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing}},
  journal = {arXiv preprint},
  year    = {2024},
}

Acknowledgement

Special thanks to Zirui Wang for helpful discussions.