Pablo Vela (@pablovelagomez1)
2025-12-24 | ❤️ 117 | 🔁 13
It's done! I updated the @rerundotio demo to work with multiview videos. Right now it's fully hooked up to the exoego-forge format.
The great thing about SAM 3 is its robustness. I didn't need to create separate sessions for each camera; I simply created a single instance, used a text prompt, and switched between cameras for an incremental update.
On top of all this, I'm running TSDF fusion on the provided depth maps plus the segmentation images to color the person over time.
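The "color the person" step amounts to blending the segmentation mask into each RGB frame before the frame is fed to TSDF fusion. A minimal numpy sketch of that masking step (the function name, highlight color, and blend weight are my own illustrative choices, not from the demo):

```python
import numpy as np

def colorize_person(rgb, mask, highlight=(255, 64, 64), alpha=0.6):
    """Blend a highlight color into the pixels where the segmentation mask is set.

    rgb:  (H, W, 3) uint8 color image
    mask: (H, W) bool array, True where the model segmented the person
    """
    out = rgb.astype(np.float32)
    color = np.array(highlight, dtype=np.float32)
    # alpha-blend only the masked pixels; everything else keeps its original color
    out[mask] = (1.0 - alpha) * out[mask] + alpha * color
    return out.astype(np.uint8)

# toy example: a black 2x2 frame with two "person" pixels
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[True, False], [False, True]])
tinted = colorize_person(rgb, mask)
```

The tinted frames can then be passed as the color input of any RGB-D TSDF integrator, so the fused mesh inherits the highlight wherever the mask fired.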
SAM 3 is such a gift to the open-source community, and I'm going to keep exploring what other cool things I can do with it =]
Quoted tweet
Pablo Vela (@pablovelagomez1)
Spent some more time with SAM 3. I really wanted to make it work for pointcloud/3d segmentation, so I did. SAM 3 is really something: with camera parameters and images, 3d segmentation and labeling just became 10x easier.
Here I show the example of doing segmentation of a person, a yellow mustard bottle, and a book on the table.
It basically consists of running a prompt-only forward pass of SAM 3 on each image, then taking the dataset's provided depth maps, generating a TSDF-fused mesh, and updating its color based on the predicted segmentation mask.
Now just need to extend it to video.
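The fusion step in the pipeline above is standard truncated signed distance function (TSDF) integration. As a rough illustration, here is a simplified 1-D sketch of the per-frame TSDF update along a single camera ray; real implementations work over a 3-D voxel grid with full camera projection, and the function and parameter names here are my own:

```python
import numpy as np

def integrate_depth(tsdf, weight, z_vox, depth, trunc=0.05):
    """One TSDF update step along a single camera ray (1-D sketch).

    tsdf, weight: running signed-distance and weight per voxel on the ray
    z_vox:        depth of each voxel along the ray (meters)
    depth:        observed surface depth for this ray in this frame (meters)
    """
    # truncated signed distance: positive in front of the surface, negative behind
    sdf = np.clip(depth - z_vox, -trunc, trunc) / trunc
    # only update voxels at or in front of the truncation band behind the surface
    valid = depth - z_vox > -trunc
    w_new = weight + valid
    tsdf_new = np.where(valid, (tsdf * weight + sdf) / np.maximum(w_new, 1), tsdf)
    return tsdf_new, w_new

# toy ray: a voxel every 0.1 m, observed surface at depth 0.5 m
z = np.linspace(0.0, 1.0, 11)
tsdf, weight = integrate_depth(np.zeros(11), np.zeros(11), z, depth=0.5)
```

After a few frames, the zero crossing of the averaged TSDF marks the fused surface, and that is where the per-voxel color can be overwritten from the segmentation mask, as the quoted tweet describes.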
🎬 Video