Andrew Davison (@AjdDavison)
2025-12-19 | ❤️ 202 | 🔁 23
Using powerful multi-view 3D vision transformer models like π³ and Depth Anything 3 for 30 FPS real-time tracking of objects and scenes via KV caching. Dyson Robotics Lab at Imperial College London.
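For readers unfamiliar with the idea, here is a minimal, generic sketch of KV caching applied to multi-view attention: keys and values for already-processed reference views are computed once and cached, so each incoming frame only pays for its own tokens' attention against the cache. This is an illustrative assumption about how KV caching enables per-frame reuse, not the KV-Tracker implementation; all module names, shapes, and token counts below are invented for the example.

```python
# Generic KV-caching sketch (illustrative only, not the authors' code).
import torch
import torch.nn.functional as F


class CachedCrossAttention(torch.nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.q_proj = torch.nn.Linear(dim, dim)
        self.k_proj = torch.nn.Linear(dim, dim)
        self.v_proj = torch.nn.Linear(dim, dim)
        self.out_proj = torch.nn.Linear(dim, dim)
        self.k_cache = None  # cached keys for reference-view tokens
        self.v_cache = None  # cached values for reference-view tokens

    def _split(self, x):
        # (batch, tokens, dim) -> (batch, heads, tokens, head_dim)
        b, n, d = x.shape
        return x.view(b, n, self.num_heads, d // self.num_heads).transpose(1, 2)

    @torch.no_grad()
    def build_cache(self, ref_tokens: torch.Tensor) -> None:
        """Run the expensive pass once over reference views and keep K/V around."""
        self.k_cache = self._split(self.k_proj(ref_tokens))
        self.v_cache = self._split(self.v_proj(ref_tokens))

    @torch.no_grad()
    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        """Per-frame work: only the new frame's queries attend to the cached K/V."""
        b, n, d = frame_tokens.shape
        q = self._split(self.q_proj(frame_tokens))
        out = F.scaled_dot_product_attention(q, self.k_cache, self.v_cache)
        return self.out_proj(out.transpose(1, 2).reshape(b, n, d))


# Toy usage: cache 4 reference views once, then process incoming frames cheaply.
attn = CachedCrossAttention()
attn.build_cache(torch.randn(1, 4 * 196, 256))  # reference-view tokens (assumed layout)
for _ in range(3):                               # incoming frames at tracking time
    new_frame = torch.randn(1, 196, 256)
    print(attn(new_frame).shape)                 # torch.Size([1, 196, 256])
```

The point of the sketch is the cost structure: per-frame compute scales with the new frame's tokens rather than with the whole set of views, which is what makes real-time rates plausible for heavy reconstruction backbones.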
Quoted Tweet
Marwan Taher (@marwan_ptr)
How can we run reconstruction models like π³ and Depth Anything 3 in real-time?
We present KV-Tracker, a training-free approach for real-time tracking of scenes and objects, achieving up to 30 FPS!
With @alzugarayign, @makezur, @XinKong_IC and @AjdDavison https://t.co/0OKWljzfek
🎬 Video