Welcome
Unsloth makes fine-tuning large language models like Llama-3, Mistral, Phi-4 and Gemma 2x faster with 70% less memory and no degradation in accuracy! Our docs will guide you through training your own custom model, covering the essentials of installing and updating Unsloth, creating datasets, and running and deploying your model.
Fine-tuning an LLM customizes its behavior, enhances domain knowledge, and optimizes performance for specific tasks. Fine-tuning is the process of updating the actual "brains" of the language model through a process called back-propagation.
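To make "back-propagation" concrete, here is a minimal illustrative sketch (plain Python, not Unsloth code): a single-weight model `y = w * x` whose weight is repeatedly nudged in the direction that reduces a squared-error loss. Real fine-tuning does the same thing over billions of weights at once.

```python
# Illustrative only: one back-propagation step on a one-weight
# linear model y = w * x with squared-error loss.
def backprop_step(w, x, y_true, lr=0.1):
    y_pred = w * x                    # forward pass
    loss = (y_pred - y_true) ** 2     # squared-error loss
    grad = 2 * (y_pred - y_true) * x  # dLoss/dw via the chain rule
    w = w - lr * grad                 # gradient-descent update
    return w, loss

# Repeated updates move the weight toward the value that fits the data.
w = 0.0
for _ in range(50):
    w, loss = backprop_step(w, x=2.0, y_true=6.0)
# w converges toward 3.0, since 3.0 * 2.0 == 6.0
```

Fine-tuning a pre-trained model means starting these updates from already-trained weights rather than from scratch, so far fewer steps and far less data are needed.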
By fine-tuning a pre-trained model (e.g. Llama-3.1-8B) on a specialized dataset, you can:
Update Knowledge: Introduce new domain-specific information.
Customize Behavior: Adjust the model's tone, personality, or response style.
Optimize for Tasks: Improve accuracy and relevance for specific use cases.
Unsloth can be installed locally on Linux or Windows (via WSL), or used on Kaggle or another GPU service like Google Colab. Most people use Unsloth through Google Colab, which provides a free GPU to train with.
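For a local setup, the standard route is a pip install (a sketch assuming Linux or WSL with a recent NVIDIA GPU and a working CUDA-enabled PyTorch; see the installation pages for platform-specific variants):

```shell
# Install Unsloth from PyPI (Linux / Windows-via-WSL; NVIDIA GPU assumed)
pip install unsloth
```

On Colab or Kaggle, the same command is run in a notebook cell instead, and the free GPU runtime handles the hardware side.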