
LoRA vs Full Fine-tuning: An Illusion of Equivalence

November 7, 2024 | 1 min read

  • LLM
  • fine-tuning

Yam Peleg (@Yampeleg)

2024-11-07 | ❤️ 1341 | 🔁 163


a very hot paper just dropped.. https://x.com/Yampeleg/status/1854608593737466221/photo/1

Media: photo

Quoted tweet

kalomaze (@kalomaze)

https://t.co/l0XSIgjnNe

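For context: LoRA (low-rank adaptation) freezes the pretrained weight matrix W and trains only a low-rank update ΔW = BA, while full fine-tuning updates W directly; the paper asks whether the two are really equivalent. A minimal PyTorch sketch of the LoRA parametrization (class name, rank r, and scaling α are illustrative defaults, not taken from the paper):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = x @ (W + (alpha/r) * B @ A).T with W frozen; only A and B train."""
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # full fine-tuning would train this matrix instead
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # small random init
        self.B = nn.Parameter(torch.zeros(out_features, r))        # zero init: update starts at 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base output plus the scaled low-rank correction x A^T B^T
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

For a 768×768 projection, full fine-tuning trains 589,824 weights while r=8 LoRA trains 8·(768+768) = 12,288; that efficiency gap is what makes the question of equivalence interesting.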

Tags

domain-기타


