MrNeRF (@janusch_patas)

2025-03-04 | โค๏ธ 186 | ๐Ÿ” 13


Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models

Contributions: (i) We show how to adapt 2D diffusion models to remove artifacts resulting from rendering a 3D neural representation, with minimal effort. The fine-tuning process takes only a few hours on a single consumer graphics card. Despite the short training time, the same model is powerful enough to remove artifacts in rendered images from both implicit representations such as NeRF and explicit representations like 3DGS.
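A minimal sketch of what the fine-tuning in (i) could look like: a single-step image-to-image diffusion model trained on pairs of artifact-laden renders and clean reference images. The class, dataset, and loss choices here are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of the fine-tuning loop from (i); model, dataset,
# and loss are illustrative assumptions, not the paper's implementation.
import torch
from torch.utils.data import DataLoader

def finetune_artifact_remover(model, paired_dataset, epochs=4, lr=1e-5, device="cuda"):
    """Fine-tune a single-step image-to-image diffusion model on pairs of
    (artifact-laden render, clean reference image)."""
    model = model.to(device).train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loader = DataLoader(paired_dataset, batch_size=4, shuffle=True)

    for _ in range(epochs):
        for noisy_render, clean_target in loader:
            noisy_render = noisy_render.to(device)
            clean_target = clean_target.to(device)

            # Single forward pass: the model predicts the clean image directly.
            prediction = model(noisy_render)

            # Simple pixel reconstruction loss; the actual method may add
            # perceptual or adversarial terms.
            loss = torch.nn.functional.l1_loss(prediction, clean_target)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

Because the renderer (NeRF or 3DGS) only supplies the degraded inputs, the same fine-tuned fixer can be applied to either representation.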

(ii) We propose an update pipeline that progressively refines the 3D representation by distilling back the improved novel views, thus ensuring multi-view consistency and significantly enhanced quality of the 3D representation. Compared to contemporary methods [26, 72] that query a diffusion model at each training time step, our approach is >10× faster.
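One way to picture the progressive update in (ii) is the loop below: render novel views, clean them with the single-step fixer, and distill the cleaned views back as extra supervision before continuing optimization. The reconstruction interface (`render`, `add_training_view`, `optimize`) is a hypothetical stand-in for whatever NeRF/3DGS trainer is used.

```python
# Illustrative sketch of the progressive refinement in (ii); the reconstruction
# API (render, add_training_view, optimize) is hypothetical.
def progressive_refinement(recon, fixer, novel_poses, rounds=3, steps_per_round=2000):
    """Alternate between (a) cleaning rendered novel views with the single-step
    diffusion model and (b) distilling them back into the 3D representation."""
    for r in range(rounds):
        # Gradually extend supervision toward the target novel trajectory.
        poses_this_round = novel_poses[: (r + 1) * len(novel_poses) // rounds]

        for pose in poses_this_round:
            rendered = recon.render(pose)           # artifact-laden novel view
            cleaned = fixer(rendered)               # one diffusion step, no iterative sampling
            recon.add_training_view(pose, cleaned)  # cleaned view acts as pseudo ground truth

        # Continue optimizing the NeRF/3DGS scene with the augmented view set,
        # which enforces multi-view consistency across the distilled views.
        recon.optimize(num_steps=steps_per_round)
    return recon
```

The speedup over methods that query a diffusion model at every training step comes from calling the fixer only a handful of times per round rather than once per optimization iteration.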

(iii) We demonstrate how single-step diffusion models enable near real-time post-processing that further improves novel view synthesis quality.
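For (iii), a single-step model means post-processing a rendered trajectory is just one forward pass per frame. The sketch below reuses the hypothetical `recon`/`fixer` interfaces from above.

```python
# Minimal sketch of the post-processing in (iii): one diffusion step per frame.
# `recon` and `fixer` follow the hypothetical interfaces sketched above.
import torch

@torch.no_grad()
def render_and_fix(recon, fixer, camera_path):
    """Render a camera trajectory and clean every frame with one diffusion step."""
    frames = []
    for pose in camera_path:
        frame = recon.render(pose)   # raw render, may still contain residual artifacts
        frames.append(fixer(frame))  # single forward pass keeps this near real time
    return frames
```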

(iv) We evaluate our approach across different datasets and present SoTA results, improving PSNR by >1dB and FID by >2× on average.

Media

video



Tags

domain-vision-3d domain-genai domain-rendering domain-visionos