DailyPapers (@HuggingPapers)

2026-01-30 | โค๏ธ 265 | ๐Ÿ” 45 | ๐Ÿ’ฌ 5


DynamicVLA

A compact 0.4B Vision-Language-Action model that finally lets robots manipulate moving objects in real-time, closing the perception-execution gap with Continuous Inference and Latent-aware Action Streaming. https://x.com/HuggingPapers/status/2017094507402318169/video/1
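The tweet doesn't detail how the streaming works, but the core idea behind overlapping inference with action execution can be illustrated with a toy schedule. This is a minimal sketch under stated assumptions, not the paper's method: `INFER_STEPS`, `CHUNK_LEN`, and both scheduling functions are hypothetical names and numbers chosen for illustration.

```python
# Toy illustration (NOT the DynamicVLA implementation) of how streaming
# action chunks can close the perception-execution gap. All names and
# constants below are illustrative assumptions.

INFER_STEPS = 3   # assumed inference latency, measured in control ticks
CHUNK_LEN = 4     # assumed number of actions produced per inference call

def blocking_schedule(total_ticks):
    """Naive loop: the robot idles during every inference call."""
    t, executed = 0, 0
    while t < total_ticks:
        t += INFER_STEPS                    # wait for inference; arm is idle
        run = min(CHUNK_LEN, total_ticks - t)
        if run <= 0:
            break
        executed += run                     # execute the finished chunk
        t += run
    return executed

def streaming_schedule(total_ticks):
    """Streaming loop: the next inference overlaps with execution."""
    t, executed = 0, 0
    t += INFER_STEPS                        # only the first chunk incurs a wait
    while t < total_ticks:
        run = min(CHUNK_LEN, total_ticks - t)
        executed += run                     # inference for the next chunk runs
        t += run                            # concurrently (CHUNK_LEN >= INFER_STEPS)
    return executed
```

In the same wall-clock budget, the streaming schedule executes more actions because inference latency is hidden behind execution of the current chunk, which is why the robot can keep up with a moving object.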


๐Ÿ”— ๋งํฌ


Media

๐ŸŽฌ ์˜์ƒ


Tags

AI-ML Robotics