📚 Sehyeon's Vault

Our New Paper: Reflex-Based Open-Vocabulary Navigation without Prior Knowledge Using Omnidirectional Camera and Multiple Vision-Language Models

August 22, 2024 · 1 min read

  • Robotics
  • web-graphics

Kento Kawaharazuka / 河原塚 健人 (@KKawaharazuka)

2024-08-22 | ❤️ 92 | 🔁 14


Our new paper “Reflex-Based Open-Vocabulary Navigation without Prior Knowledge Using Omnidirectional Camera and Multiple Vision-Language Models” has been accepted at Advanced Robotics!

Website: https://haraduka.github.io/omnidirectional-vlm/
Video: https://x.com/KKawaharazuka/status/1826549893235626175/video/1
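The title suggests reflex-style steering: score candidate headings from the omnidirectional view against a free-form language goal and turn toward the best match. As a loose illustration only (not the authors' method; the toy vectors below stand in for real VLM image/text embeddings such as CLIP's), direction selection by cosine similarity could look like this:

```python
import math

def pick_heading(direction_embeddings, goal_embedding):
    """Return the index of the view sector whose (VLM-style) image embedding
    is most similar, by cosine similarity, to the language-goal embedding."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    scores = [cos(v, goal_embedding) for v in direction_embeddings]
    return scores.index(max(scores))

# Toy stand-in embeddings (a real system would use VLM encoders for each
# sector of the omnidirectional image and for the text prompt).
views = [(1.0, 0.0), (0.0, 1.0), (0.7, 0.7)]
goal = (0.0, 1.0)  # e.g. embedding of "go to the door"
print(pick_heading(views, goal))  # sector 1 best matches the goal
```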

Media

video


Tags

domain-robotics domain-ai-ml domain-llm domain-vlm domain-web-graphics




Created with Quartz v4.5.2 © 2026 · GitHub · Sehyeon Park