📚 Sehyeon's Vault

๐ŸŒ ๋„๋ฉ”์ธ

  • ๐Ÿ”ฎ3D-Vision
  • ๐ŸŽจRendering
  • ๐Ÿค–Robotics
  • ๐Ÿง LLM
  • ๐Ÿ‘๏ธVLM
  • ๐ŸŽฌGenAI
  • ๐ŸฅฝXR
  • ๐ŸŽฎSimulation
  • ๐Ÿ› ๏ธDev-Tools
  • ๐Ÿ’ฐCrypto
  • ๐Ÿ“ˆFinance
  • ๐Ÿ“‹Productivity
  • ๐Ÿ“ฆ๊ธฐํƒ€

๐Ÿ“„ Papers

  • ๐Ÿ“š์ „์ฒด ๋…ผ๋ฌธ172

December 15, 2023 · 1 min read

  • Misc

Awni Hannun (@awnihannun)

2023-12-15 | โค๏ธ 1135 | ๐Ÿ” 150


Fine-tuning Mistral 7B with LoRA on a 32 GB M1 (laptop!) in MLX

https://github.com/ml-explore/mlx-examples/tree/main/lora

Updated example uses less RAM + support for custom datasets 🚀 https://x.com/awnihannun/status/1735782998623261071/video/1

Media

video
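As a rough sketch of the custom-dataset support mentioned in the tweet: the linked MLX LoRA example reads training data as JSONL, one JSON object per line with a `"text"` field, from files like `train.jsonl` in a data directory. The filenames, key name, and example strings below are assumptions based on the repo's conventions, not verified against this exact revision.

```python
import json
from pathlib import Path

# Hypothetical training examples for illustration only.
examples = [
    {"text": "Q: What is MLX? A: An array framework for Apple silicon."},
    {"text": "Q: What is LoRA? A: Low-rank adaptation for efficient fine-tuning."},
]

data_dir = Path("data")
data_dir.mkdir(exist_ok=True)

# Write one JSON object per line, each with a "text" key (assumed format).
with open(data_dir / "train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Sanity check: every line parses and carries a "text" field.
lines = (data_dir / "train.jsonl").read_text().splitlines()
assert all("text" in json.loads(line) for line in lines)
print(f"Wrote {len(lines)} training examples")
```

The example would then be pointed at this directory with something like `python lora.py --model mistralai/Mistral-7B-v0.1 --train --data data` (flags assumed from the repo's README at the time).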


Tags

domain-기타


