Andrew Davison (@AjdDavison)

2025-12-19 | โค๏ธ 202 | ๐Ÿ” 23


Using powerful multi-view 3D vision transformer models like π³ and Depth Anything 3 for 30 FPS real-time tracking of objects and scenes via KV caching. Work from the Dyson Robotics Lab at Imperial College London.
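To make the "via KV caching" point concrete, here is a minimal, illustrative sketch of the general KV-caching idea: keys and values computed for previously processed view tokens are kept in a cache, so each new frame only attends against them instead of reprocessing the whole sequence. All names here (KVCache, attend_new_frame, the toy dimensions) are hypothetical and are not taken from the KV-Tracker paper or the π³ / Depth Anything 3 codebases.

```python
# Generic single-head KV-caching sketch, assuming a transformer whose earlier
# view tokens can be reused verbatim for attention at later frames.
import math
import torch

class KVCache:
    """Stores keys and values from previously processed view tokens."""
    def __init__(self):
        self.keys = None    # (num_cached_tokens, d_model)
        self.values = None  # (num_cached_tokens, d_model)

    def append(self, k: torch.Tensor, v: torch.Tensor) -> None:
        self.keys = k if self.keys is None else torch.cat([self.keys, k], dim=0)
        self.values = v if self.values is None else torch.cat([self.values, v], dim=0)

def attend_new_frame(q_new: torch.Tensor, k_new: torch.Tensor, v_new: torch.Tensor,
                     cache: KVCache) -> torch.Tensor:
    """Attention for the new frame's tokens against cached + new keys/values."""
    cache.append(k_new, v_new)                   # reuse old K/V, add the new frame's
    d = q_new.shape[-1]
    scores = q_new @ cache.keys.T / math.sqrt(d)
    return torch.softmax(scores, dim=-1) @ cache.values

# Toy usage: three incoming frames, 16 tokens each, 64-dim features.
cache, d_model = KVCache(), 64
for _ in range(3):
    tokens = torch.randn(16, d_model)
    # In a real model q, k, v come from learned projection matrices.
    q, k, v = tokens, tokens, tokens
    out = attend_new_frame(q, k, v, cache)       # (16, d_model)
print(cache.keys.shape)  # torch.Size([48, 64]) -- the cache grows with each frame
```

The point of the sketch is only the cost structure: per new frame, attention is computed once against an ever-growing cache rather than re-running the full multi-view model from scratch, which is what makes real-time rates plausible.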


Quoted tweet

Marwan Taher (@marwan_ptr)

How can we run reconstruction models like π³ and Depth Anything 3 in real-time?

We present KV-Tracker, a training-free approach for real-time tracking of scenes and objects, achieving up to 30 FPS!

With @alzugarayign, @makezur, @XinKong_IC and @AjdDavison https://t.co/0OKWljzfek

Original tweet

๐ŸŽฌ ์˜์ƒ

Tags

3D Robotics AI-ML