We're sharing 2 Colab notebooks with free GPUs to finetune Llama-3 and deploy it to Ollama! You'll create a custom chatbot just like ChatGPT for free, and you can also upload a CSV / Excel file! Our new docs at docs.unsloth.ai include every detail about finetuning with Unsloth AI🦥, which makes LLM finetuning 2x faster and uses 70% less memory. 1. Colab to create a custom chatbot: https://lnkd.in/g4tE6aUY 2. Colab to upload CSV / Excel for finetuning: https://lnkd.in/gaMu8tZy 3. Full tutorial to finetune & export to Ollama: https://lnkd.in/gUrm3xi2 Our free OSS package's Github is at https://lnkd.in/dcqhW9Vv with tons more notebooks for finetuning Mistral, Gemma, Phi-3, reward modelling like DPO, continued pretraining, text completion and more!
Unsloth AI
Technology, Information and Internet
San Francisco, California 2,054 followers
Making AI accessible for everyone! 🦥
About us
Easily finetune & train LLMs. Get faster with unsloth.
- Website
https://unsloth.ai
- Industry
- Technology, Information and Internet
- Company size
- 2-10 employees
- Headquarters
- San Francisco, California
- Type
- Privately Held
- Founded
- 2023
- Specialties
- artificial intelligence, ai, llms, language models, and finetuning
Locations
Primary
San Francisco, California 94107, US
Updates
-
Unsloth AI reposted this
📚 Check out this fantastic notebook by Daniel Han, the co-creator of Unsloth AI! Discover how to fine-tune Gemma-2-9b using Kaggle notebooks. 👉 Learn more: https://goo.gle/3VZn97S
Kaggle Gemma2 9b Unsloth notebook
kaggle.com
-
Unsloth AI reposted this
We're releasing 2 free Colab/Kaggle notebooks for 2x faster & 63% less memory finetuning for Gemma 2! Gemma 2 is Google's latest free LLM. 27b is trained on 13 trillion tokens, and 9b is distilled on 8T tokens. Please also update Unsloth AI🦥 (we already updated all our notebooks). We also uploaded 4bit pre-quantized base and instruction tuned models for 8x faster downloads here! We also wrote a blog post about our findings! Gemma 2 Colab finetuning notebook: https://lnkd.in/gqTPWEiv Gemma 2 Kaggle finetuning notebook: https://lnkd.in/g-FgEr68 Unsloth's HF repo: huggingface.co/unsloth Github page: https://lnkd.in/gyaDBTxK Our blog: unsloth.ai/blog
Google Colab
colab.research.google.com
-
Unsloth AI reposted this
AutoTrain + Unsloth = 🚀🚀🚀 AutoTrain has now added support for Unsloth, which means you can use Unsloth's optimizations to finetune LLMs super fast and with much less memory 💥 All you need to do is set the unsloth parameter to true 🤗 P.S. You can use the unsloth param in the CLI config or --unsloth in the pure CLI. Currently supported: LLM SFT and LLM Generic 🚀
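A hedged sketch of what that config toggle might look like. Only the unsloth flag itself comes from the post above; every other key (task name, model id, data path) is an illustrative placeholder, not AutoTrain's confirmed schema:

```yaml
# Illustrative AutoTrain-style config fragment.
# Only `unsloth: true` is taken from the announcement above;
# the remaining keys are placeholders, not a verified schema.
task: llm-sft                              # placeholder: "LLM SFT" per the post
base_model: meta-llama/Meta-Llama-3-8B     # placeholder model id
data_path: ./my_dataset                    # placeholder dataset path
params:
  unsloth: true                            # the documented switch: enable Unsloth optimizations
```

Per the post, the same switch is available as --unsloth when driving AutoTrain purely from the command line.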
-
Unsloth AI reposted this
We're sharing 2 free Colab notebooks for continued pretraining with QLoRA! Our new 🦥Unsloth AI release allows you to easily continually pretrain LLMs 2x faster and use 50% less VRAM than HF + FA2. Continued pretraining allows LLMs to learn new domain knowledge and out-of-distribution data. If an LLM wasn't trained on other languages, you can train it on Wikipedia data to make it learn! How about financial data? Or medical data? You sure can train a model to learn about these fields! We've released a free Colab notebook to continually pretrain Mistral v0.3 7b to learn a new language like Korean: https://lnkd.in/gj6e8hff and another Colab for text completion: https://lnkd.in/gNqiFBVd We also show in our blog post https://lnkd.in/g_AZ5qzi how to select good parameters for continual pretraining and more! We have more notebooks on our Github page: https://lnkd.in/gyaDBTxK
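The workflow above boils down to feeding raw domain text (Wikipedia dumps, financial filings, medical notes) to the model as plain next-token-prediction data, with no prompt/response structure. A minimal Python sketch of that data preparation, using whitespace splitting as a stand-in for a real tokenizer; the function name and chunk size are illustrative, not Unsloth's API:

```python
# Hedged sketch: shaping raw domain text into examples for continued
# pretraining. A real run would tokenize with the model's tokenizer and
# hand the dataset to a trainer; this only illustrates the data layout.

def chunk_for_pretraining(raw_text, max_tokens=128):
    """Split raw text into contiguous fixed-size chunks.

    Continued pretraining is plain next-token prediction, so each
    training example is just a slice of the corpus.
    """
    tokens = raw_text.split()  # stand-in for a real tokenizer
    return [
        " ".join(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

# Toy stand-in for e.g. a Korean Wikipedia dump: 300 "tokens"
corpus = "word " * 300
examples = chunk_for_pretraining(corpus, max_tokens=128)
print(len(examples))  # 300 tokens -> chunks of 128, 128, 44
```

Each chunk becomes one training example; the notebooks linked above handle the actual 2x-faster QLoRA training on top of data prepared this way.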
-
Unsloth AI reposted this
Unsloth AI🦥 just hit 1 million monthly downloads on Hugging Face today!🥳 We make LLM finetuning 2x faster and use 70% less memory with no accuracy degradation! We have free finetuning notebooks on our Github page for Llama-3, Mistral and Gemma! Inference is also 2x faster! Free notebook to finetune Llama-3: https://lnkd.in/g_JPnvD9 Our Hugging Face page: huggingface.co/unsloth Join our Discord for AI banter, Q&A: https://lnkd.in/g5cQCDnP And star us on Github! https://lnkd.in/gyaDBTxK
-
Unsloth AI reposted this
We're releasing two free Colab notebooks for 2x faster & 50% less memory finetuning for Phi-3 & ORPO! Phi-3 is Microsoft's latest mini model distilled from ChatGPT. ORPO combines finetuning and reward modelling into 1 step! Please also update Colab's install instructions when using Unsloth AI🦥 (we already updated all our notebooks) - using the old instructions will make installations fail. ORPO finetuning notebook: https://lnkd.in/g_gfm7KU Phi-3 finetuning notebook: https://lnkd.in/gXdjUyXR Phi-3 Mistral-fied model: https://lnkd.in/gKDM4UGs Unsloth's HF repo: huggingface.co/unsloth Github page: https://lnkd.in/gyaDBTxK Phi-3 Colab notebook:
Google Colaboratory
colab.research.google.com
-
Unsloth AI reposted this
Super appreciate everyone's support for Unsloth AI🦥! We managed to hit 500K monthly HuggingFace model downloads, and we're trending weekly on Github!! Unsloth makes finetuning LLMs like Llama-3 easier, 2x faster, and uses 70% less VRAM! We have free notebooks + GPUs for any finetuning job! Github: https://lnkd.in/gyaDBTxK Llama-3 8b 2x faster notebook: https://lnkd.in/gQZRy-Wp Mistral 7b 2x faster notebook: https://lnkd.in/g6ENbnbu Continued Pretraining notebook: https://lnkd.in/gNqiFBVd Conversational notebook: https://lnkd.in/gmyJQJNR DPO Reward Modelling: https://lnkd.in/g45Tnm69
-
Unsloth AI reposted this
Unsloth AI🦥 can finetune Llama-3 70b 1.83x faster and use 68% less memory, with 0% accuracy degradation, and Llama-3 8b is 2x faster and uses 63% less memory! Inference / running the model is also 2x faster! Unsloth also supports extremely long context finetuning - 6x longer with tiny overhead, allowing 48K contexts vs 7.5K before. We also wrote a blog about our new release! Blog: unsloth.ai/blog/llama3 Github repo: https://lnkd.in/gyaDBTxK Free 2x faster finetuning notebook: https://lnkd.in/gQZRy-Wp Free Kaggle 30 hours per week GPU notebook: https://lnkd.in/etq4dFCZ Discord Server: discord.gg/u54VK8m8tk
-
Unsloth AI reposted this
Are you building Australia's next AI Unicorn? I'm stoked to announce Build Club is launching a 6 week accelerator ending in SF, partnered with AWS Startups. We are looking for engineers and researchers who are building at the bleeding edge of AI! You will join a tight knit cohort of 10-15 other 👩🏻💻 technical founders. You will push each other to ship harder, cry, laugh and win together. This accelerator is hardcore, because we are also hardcore about AI. Here's what is in store 👇 - Two day goal setting offsite in a hacker cabin a 3h drive from Sydney - 24/7 co-working space + gym, snacks from Sydney's Stone & Chalk - 3 week immersion in SF, the beating pulse of AI - $30K+ from AWS Startups, MongoDB, Airwallex, NVIDIA, GTM co-marketing opportunities - Intimate dinners and office hours with top VCs and mentors - Hiring & investor demo days Who are we looking for? We have some preselected peers who we have worked with in Build Club, including Daniel Han the founder of Unsloth AI and Micah Hill-Smith the founder of Artificial Analysis. If you think you are on their level ✨.. then come build with us. Huge thank you to our partners: Amazon Web Services (AWS), Aura Ventures, Harken, Airwallex, NVIDIA, MongoDB, Stone & Chalk and our advisors Julia Wells, Logan Kilpatrick, Lauren Capelin, David Booth, Eric Chan, Maxine Minter, Anton Borzov for helping make this come to life. It's truly an ecosystem effort 💖. Grateful to you all! Thank you to Simon Thomsen for the write up in 🗞️ Startup Daily and Jassmyn Goh for the write up in 🗞️ Capital Brief. Have questions? We will be live in Q&A in comments for the next few hours. #startups #ai #accelerate #sf