Matthias Niessner (@MattNiessner)

2024-12-17 | โค๏ธ 123 | ๐Ÿ” 31


๐Ÿ“ข๐Ÿ“ข๐†๐€๐…: ๐†๐š๐ฎ๐ฌ๐ฌ๐ข๐š๐ง ๐€๐ฏ๐š๐ญ๐š๐ซ ๐‘๐ž๐œ๐จ๐ง๐ฌ๐ญ๐ซ๐ฎ๐œ๐ญ๐ข๐จ๐ง ๐Ÿ๐ซ๐จ๐ฆ ๐Œ๐จ๐ง๐จ๐œ๐ฎ๐ฅ๐š๐ซ ๐•๐ข๐๐ž๐จ๐ฌ ๐ฏ๐ข๐š ๐Œ๐ฎ๐ฅ๐ญ๐ข-๐ฏ๐ข๐ž๐ฐ ๐ƒ๐ข๐Ÿ๐Ÿ๐ฎ๐ฌ๐ข๐จ๐ง๐Ÿ“ข๐Ÿ“ข

We reconstruct animatable Gaussian head avatars from monocular videos captured by commodity devices such as smartphones.

Key idea: distill reconstruction constraints from a multi-view head diffusion model to complete unobserved regions.
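The distillation idea can be illustrated with a minimal score-distillation-style sketch: noise a rendered view, let the diffusion model predict that noise, and treat the prediction error as a gradient on the render. This is a generic sketch of the technique the post names, not GAF's actual implementation; `denoiser` and the weighting are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(noisy, t):
    # Hypothetical stand-in for the multi-view head diffusion model's
    # noise prediction; a real system would run a trained network here.
    return noisy * 0.1

def sds_gradient(rendered, t, alpha_bar):
    # Score-distillation-style gradient: add noise at level alpha_bar,
    # predict it with the diffusion model, and use the residual
    # (eps_pred - eps) as a pixel-space gradient on the rendered view.
    eps = rng.standard_normal(rendered.shape)
    noisy = np.sqrt(alpha_bar) * rendered + np.sqrt(1 - alpha_bar) * eps
    eps_pred = denoiser(noisy, t)
    w = 1 - alpha_bar  # one common weighting choice (assumption)
    return w * (eps_pred - eps)

render = rng.standard_normal((3, 64, 64))  # a rendered avatar view
g = sds_gradient(render, t=500, alpha_bar=0.5)
print(g.shape)  # gradient has the same shape as the rendered image
```

In an avatar pipeline, this gradient would be backpropagated through the renderer into the Gaussian parameters, supervising regions the monocular video never observes.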

https://tangjiapeng.github.io/projects/GAF/
https://www.youtube.com/watch?v=QuIYTljvhyg&feature=youtu.be

Great work by @jiapeng_tang @davidedavoli @TobiasKirschst1 @liamschoneveld

Media: video


Tags

domain-vision-3d domain-genai