👋Welcome

New to Unsloth? Start here!

Unsloth makes finetuning large language models like Llama-3, Mistral, Phi-3, and Gemma 2x faster, with 70% less memory use and no degradation in accuracy! Our docs will guide you through training your very own custom model, covering the essentials of creating datasets, then running and deploying your model. You'll also learn how to integrate third-party tools, use platforms like Google Colab, and more!

📒 Unsloth Notebooks
📚 All Our Models
📂 Saving & Using Models
📥 Install / Update
🦙 How to Finetune Llama-3 and Export to Ollama

What is finetuning, and why do it?

If we want a language model to learn a new skill, a new language, a new programming language, or simply to follow and answer instructions the way ChatGPT does, we finetune it!

Finetuning updates the actual "brains" of the language model through a process called back-propagation. However, finetuning can be very slow and very resource-intensive.
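To get an intuition for back-propagation, here is a minimal toy sketch (not Unsloth code, and far simpler than a real language model): a single weight is repeatedly nudged against the gradient of a loss, which is exactly the update rule that finetuning applies to billions of weights at once.

```python
def train(xs, ys, lr=0.1, steps=100):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0  # the model's single "brain cell", starting untrained
    for _ in range(steps):
        # gradient of MSE loss with respect to w (the "backward" step)
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        # update the weight against the gradient (the "learning" step)
        w -= lr * grad
    return w

# Learn the rule y = 2x from three tiny examples
w = train([1, 2, 3], [2, 4, 6])
```

After training, `w` converges to roughly 2.0, the weight that best explains the data. Finetuning a language model is this same loop at a vastly larger scale, which is why it is so resource-intensive and why Unsloth's speed and memory optimizations matter.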

How to use Unsloth?

Our open-source version of Unsloth can be installed locally, or on a GPU service like Google Colab. Most people use Unsloth through Google Colab, which provides a free GPU to train with. You can access all of our notebooks here.
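For a local install on a machine with a supported NVIDIA GPU, the basic command is a single pip install (see the Install / Update page for the exact variant matching your CUDA and PyTorch versions):

```shell
# Install Unsloth from PyPI; consult the Install / Update page
# for version-specific extras before running this on your setup.
pip install unsloth
```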
