MrNeRF (@janusch_patas)
2025-06-20 | ❤️ 62 | 🔁 9
Particle-Grid Neural Dynamics for Learning Deformable Object Models from RGB-D Videos
Abstract (excerpt): Our particle-grid model captures global shape and motion information while predicting dense particle movements, enabling the modeling of objects with varied shapes and materials.
- Particles represent object shapes, while the spatial grid discretizes the 3D space to ensure spatial continuity and enhance learning efficiency.
- Coupled with Gaussian Splatting for visual rendering, our framework achieves a fully learning-based digital twin of deformable objects and generates 3D action-conditioned videos.
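The bullets above describe a particle-grid representation: particles carry object state, and a dense 3D grid provides spatial continuity by pooling nearby particles into shared cells. A minimal sketch of that idea is a particle-to-grid scatter with trilinear weights, as used in PIC/MPM-style solvers. The function name, shapes, and weighting scheme below are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def particle_to_grid(positions, features, grid_res, cell_size):
    """Scatter per-particle features onto a dense 3D grid with trilinear weights.

    Illustrative sketch only: nearby particles contribute to shared grid
    cells, which is what gives the grid its spatial-continuity role.
    positions: (N, 3) array of particle coordinates
    features:  (N, F) array of per-particle feature vectors
    """
    grid = np.zeros((grid_res, grid_res, grid_res, features.shape[1]))
    weight = np.zeros((grid_res, grid_res, grid_res, 1))
    for p, f in zip(positions, features):
        base = np.floor(p / cell_size).astype(int)  # lower-corner cell index
        frac = p / cell_size - base                 # fractional offset in cell
        # distribute the feature over the 8 surrounding grid nodes
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    idx = base + np.array([dx, dy, dz])
                    if np.any(idx < 0) or np.any(idx >= grid_res):
                        continue
                    w = ((dx * frac[0] + (1 - dx) * (1 - frac[0]))
                         * (dy * frac[1] + (1 - dy) * (1 - frac[1]))
                         * (dz * frac[2] + (1 - dz) * (1 - frac[2])))
                    grid[tuple(idx)] += w * f
                    weight[tuple(idx)] += w
    # normalize so each occupied cell holds a weighted feature average
    return grid / np.maximum(weight, 1e-8)
```

A learned dynamics model can then run convolutions over this grid and interpolate predictions back to the particles, which is one common way to combine Lagrangian particles with an Eulerian grid.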
Through experiments, we demonstrate that our model learns the dynamics of diverse objects, such as ropes, cloths, stuffed animals, and paper bags, from sparse-view RGB-D recordings of robot-object interactions, while also generalizing at the category level to unseen instances.
Our approach outperforms state-of-the-art learning-based and physics-based simulators, particularly in scenarios with limited camera views.
🔗 Related
See similar notes in domain-vision-3d, domain-rendering, domain-robotics, domain-simulation, domain-dev-tools
Tags
type-paper domain-vision-3d domain-rendering domain-robotics domain-simulation domain-dev-tools