AMD

Fine-tune with Unsloth on AMD GPUs.

Unsloth supports AMD Radeon RX GPUs, Instinct MI300X (192GB) GPUs, and more.

1. Make a new isolated environment (Optional)

To avoid breaking any system packages, you can make an isolated pip environment. Remember to check which Python version you have! The command might be pip3, pip3.13, python3, python3.13, etc.

apt install python3.10-venv python3.11-venv python3.12-venv python3.13-venv -y

python -m venv unsloth_env
source unsloth_env/bin/activate
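Once the environment is activated, a quick sanity check (standard library only; nothing beyond the venv name above is assumed) confirms the interpreter actually comes from the venv:

```python
# Quick sanity check that the virtual environment is active.
import sys

print(sys.executable)  # should point inside unsloth_env/bin when the venv is active
# Inside a virtual environment, sys.prefix differs from sys.base_prefix.
print("In a venv:", sys.prefix != sys.base_prefix)
```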
2. Install PyTorch

Install the latest PyTorch, TorchAO, and Xformers ROCm builds from https://pytorch.org/

pip install torch==2.8.0 torchvision torchaudio torchao==0.13.0 xformers --index-url https://download.pytorch.org/whl/rocm6.4
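After installing, it is worth verifying that the ROCm wheel was actually picked up. A short check using standard PyTorch attributes (no Unsloth needed yet):

```python
# Verify the ROCm build of PyTorch is installed and sees the GPU.
import torch

print(torch.__version__)          # e.g. 2.8.0+rocm6.4 for the ROCm wheel
print(torch.version.hip)          # ROCm/HIP version string; None on CUDA-only builds
print(torch.cuda.is_available())  # ROCm devices are exposed through the CUDA API
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```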
3. Install Unsloth

Install Unsloth's dedicated AMD branch:

pip install --no-deps unsloth unsloth-zoo
pip install --no-deps git+https://github.com/unslothai/unsloth-zoo.git
pip install "unsloth[amd] @ git+https://github.com/unslothai/unsloth"

And that's it! Try some examples from our Unsloth Notebooks page, for example our gpt-oss RL notebook that auto-wins 2048 on an MI300X (192GB) GPU.

Troubleshooting

As of October 2025, bitsandbytes on AMD is still unstable - you might get HSA_STATUS_ERROR_EXCEPTION: An HSAIL operation resulted in a hardware exception errors. Until a fix is found, Unsloth automatically disables bitsandbytes internally for versions 0.48.2.dev0 and below. This means load_in_4bit = True will fall back to 16-bit LoRA instead. Full fine-tuning also works via full_finetuning = True.

To force 4-bit, you need to specify the exact model name, such as unsloth/gemma-3-4b-it-unsloth-bnb-4bit, and set use_exact_model_name = True as an extra argument to FastLanguageModel.from_pretrained etc.
