Original Tweet
If you've ever tried to create 3DGS scenes from photos taken with different cameras or lighting conditions, you know the pain. The colors shift, the exposure varies, and your splats end up looking… well, weird.
Most current solutions throw neural networks at the problem. They work on training data but fall apart on novel views. However, @NVIDIAAIDev's paper that just dropped, PPISP (Physically-Plausible Image Signal Processing), takes a different approach: it models the actual camera pipeline.
TLDR: It separates what changes from what doesn't. Camera-specific properties like vignetting and sensor response stay constant, while per-frame properties like exposure and white balance get their own parameters.
Because it models real camera physics, you can manually adjust exposure or white balance like you would in Lightroom.
It then predicts what exposure/WB settings a real camera would use for new viewpoints. Think of it as auto-exposure for 3D reconstructions.
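To make the split concrete, here is a minimal toy sketch of the idea described above: parameters shared by one camera (vignetting, tone response) live in one object, while exposure and white balance vary per frame. All names and the simplified ISP math are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

class CameraParams:
    """Constant for a given camera: vignetting falloff and tone response.
    (Illustrative stand-ins for the camera-specific terms PPISP keeps fixed.)"""
    def __init__(self, vignette_strength=0.3, gamma=2.2):
        self.vignette_strength = vignette_strength
        self.gamma = gamma

class FrameParams:
    """Varies per photo: exposure scale and white-balance gains (R, G, B)."""
    def __init__(self, exposure=1.0, wb_gains=(1.0, 1.0, 1.0)):
        self.exposure = exposure
        self.wb_gains = np.array(wb_gains)

def apply_isp(linear_rgb, cam, frame):
    """Map linear scene radiance (H, W, 3) to a display-ready image."""
    h, w, _ = linear_rgb.shape
    # Radial vignetting: darken pixels by squared distance from the center.
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((xx - w / 2) / (w / 2)) ** 2 + ((yy - h / 2) / (h / 2)) ** 2
    out = linear_rgb * (1.0 - cam.vignette_strength * r2)[..., None]
    # Per-frame exposure and white balance, then the camera's tone curve.
    out = out * frame.exposure * frame.wb_gains
    return np.clip(out, 0.0, 1.0) ** (1.0 / cam.gamma)

# Same scene radiance, two "shots": only the per-frame parameters differ.
scene = np.full((4, 4, 3), 0.5)
cam = CameraParams()
shot_a = apply_isp(scene, cam, FrameParams(exposure=1.2, wb_gains=(1.1, 1.0, 0.9)))
shot_b = apply_isp(scene, cam, FrameParams(exposure=0.8))
```

Because the pipeline is factored this way, re-rendering a novel view only needs new per-frame values (which the paper predicts, like a camera's auto-exposure), while the camera terms stay fixed and stay editable by hand.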
The results beat SOTA methods on standard benchmarks.
You can learn more from Radiance Fields' (Gaussian Splatting and NeRFs) full coverage on their website: https://radiancefields.com/nvidia-announces-ppisp-for-radiance-fields
Original link
Related
- eag-pt-emission-aware-gaussians-and-path-tracing-for-indoor-281012 – Topics: Gaussian Splatting, NeRF, Reconstruction, Relighting
- gr3en-generative-relighting-for-3d-environments-057299 – Topics: NeRF, Reconstruction, Relighting
- lod-structured-3d-gaussian-splatting-for-streaming-video-rec-426978 – Topics: Gaussian Splatting, NeRF, Reconstruction
- eag-pt-emission-aware-gaussians-and-path-tracing-for-indoor-scene-reconstruction – Topics: Gaussian Splatting, Reconstruction, Relighting
- luxremix-lighting-decomposition-and-remixing-for-indoor-scenes – Topics: Gaussian Splatting, Reconstruction, Relighting