👋Welcome

New to Unsloth?

Unsloth makes finetuning large language models like Llama-3, Mistral, Phi-4 and Gemma 2x faster, with 70% less memory use and no degradation in accuracy! Our docs will guide you through training your own custom model. They cover the essentials of installing and updating Unsloth, creating datasets, and running and deploying your model. You'll also learn how to integrate third-party tools, work with services like Google Colab, and more!

Beginner? Start here!

📒 Unsloth Notebooks
🔮 All Our Models
🦙 Tutorial: How to Finetune Llama-3 and Use In Ollama

What is finetuning and why?

If we want a language model to learn a new skill, a new language, a new programming language, or simply to follow and answer instructions the way ChatGPT does, we finetune it! Learn more:

🤔 FAQ + Is Fine-tuning Right For Me?

Finetuning is the process of updating the actual "brains" of the language model (its weights) through a process called back-propagation.
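
To make back-propagation concrete, here is a toy PyTorch sketch (not Unsloth code, just an illustration with made-up numbers) of a single training step: the model makes a prediction, a loss measures how wrong it is, and back-propagation computes the gradients used to nudge the weights.

```python
import torch

# Toy example of back-propagation: the sizes and values are made up,
# but the steps mirror what happens inside real finetuning.
weights = torch.randn(4, requires_grad=True)  # the model's "brains" (trainable parameters)
inputs  = torch.randn(4)                      # one training example
target  = torch.tensor(1.0)                   # what the model should have predicted

prediction = (weights * inputs).sum()         # forward pass
loss = (prediction - target) ** 2             # how wrong the prediction is
loss.backward()                               # back-propagation: compute gradients

with torch.no_grad():
    weights -= 0.01 * weights.grad            # update the weights to reduce the loss
    weights.grad.zero_()
```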

How to use Unsloth?

Unsloth can be installed locally on Linux or Windows (via WSL), or run on a GPU service such as Kaggle or Google Colab. Most people use Unsloth through Google Colab, which provides a free GPU to train with.

📥 Installing + Updating
🛠️ Unsloth Requirements
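
Once Unsloth is installed, a typical quick-start looks roughly like the sketch below, following the pattern used in our notebooks. The model name and hyperparameters are only illustrative; check the notebooks linked above for up-to-date values.

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model (the model name here is just an example).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach lightweight LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                # LoRA rank; illustrative value
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# From here, pass the model and your dataset to a trainer (e.g. trl's SFTTrainer);
# see the notebooks linked above for full recipes.
```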
