🌠Qwen3-2507
Run Qwen3-30B-A3B-2507 and 235B-A22B Thinking and Instruct versions locally on your device!
Qwen released 2507 (July 2025) updates for their Qwen3 30B and 235B models, introducing both "thinking" and "non-thinking" variants. The non-thinking 'Qwen3-30B-A3B-Instruct-2507' and 'Qwen3-235B-A22B-Instruct-2507' feature a 256K context window, improved instruction following, multilingual capabilities, and alignment.
The thinking models 'Qwen3-30B-A3B-Thinking-2507' and 'Qwen3-235B-A22B-Thinking-2507' excel at reasoning, with the 235B achieving SOTA results in logic, math, science, coding, and advanced academic tasks.
Unsloth also now supports fine-tuning and Reinforcement Learning (RL) of Qwen3-2507 models: 2x faster, with 70% less VRAM, and 8x longer context lengths.
Unsloth Dynamic 2.0 GGUFs:
⚙️Best Practices
The settings for the Thinking and Instruct models are different. The Thinking model uses temperature = 0.6 and top_p = 0.95, while the Instruct model uses temperature = 0.7 and top_p = 0.8.
To achieve optimal performance, Qwen recommends these settings:

| Setting | Instruct (non-thinking) | Thinking |
| --- | --- | --- |
| Temperature | 0.7 | 0.6 |
| Min_P | 0.00 (llama.cpp's default is 0.1) | 0.00 (llama.cpp's default is 0.1) |
| Top_P | 0.80 | 0.95 |
| TopK | 20 | 20 |
| presence_penalty | 0.0 to 2.0 (llama.cpp turns it off by default; set it to reduce repetitions) | 0.0 to 2.0 (llama.cpp turns it off by default; set it to reduce repetitions) |
Adequate Output Length: Use an output length of 32,768 tokens, which is adequate for most queries.
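If you run the model through Hugging Face transformers rather than a GGUF, the same settings map directly onto generate(). The following is only a minimal sketch, assuming a recent transformers release (for min_p sampling) and enough memory to load the weights; the repo id shown is the upstream Qwen checkpoint and is used purely as an example.

# Minimal sketch: apply the recommended sampling settings with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-30B-A3B-Instruct-2507"  # swap in the Thinking variant if desired
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is 1+1?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Instruct settings; for the Thinking model use temperature=0.6 and top_p=0.95 instead
outputs = model.generate(
    inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))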
The chat template for both the Thinking and Instruct models is shown below (the Thinking model additionally wraps its reasoning in <think></think> tags):
<|im_start|>user
Hey there!<|im_end|>
<|im_start|>assistant
Hello! How can I help?<|im_end|>
<|im_start|>user
What is 1+1?<|im_end|>
<|im_start|>assistant
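You normally don't have to assemble these tokens by hand. Below is a small sketch of rendering the same conversation with the tokenizer's chat template; the message contents are just placeholders and the repo id is the upstream Qwen checkpoint.

# Sketch: render the chat template above instead of concatenating tokens manually.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B-Instruct-2507")
messages = [
    {"role": "user", "content": "Hey there!"},
    {"role": "assistant", "content": "Hello! How can I help?"},
    {"role": "user", "content": "What is 1+1?"},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends the trailing <|im_start|>assistant
)
print(prompt)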
📖 Run Qwen3-30B-A3B-2507 Tutorials
Below are guides for the Thinking and Instruct versions of the model.
Instruct: Qwen3-30B-A3B-Instruct-2507
Because this is a non-thinking model, there is no need to set enable_thinking=False, and the model does not generate <think></think> blocks.
⚙️Best Practices
To achieve optimal performance, Qwen recommends the following settings:

- temperature = 0.7, top_p = 0.80, top_k = 20, min_p = 0.00 (llama.cpp's default min_p is 0.1).
- presence_penalty = 0.0 to 2.0 if your framework supports it, to reduce endless repetitions (llama.cpp turns it off by default); try 1.0 for example.
- The model supports up to 262,144 tokens of context natively, but you can set it to 32,768 tokens for less RAM use.
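If you prefer an OpenAI-compatible workflow, you can serve the GGUF with llama-server (not covered above) and pass the same settings from any client. The sketch below assumes the server is running locally on its default port 8080; the model name field is just a label and is ignored by llama-server.

# Sketch: query a local llama-server with the recommended Instruct settings.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-needed")
response = client.chat.completions.create(
    model="Qwen3-30B-A3B-Instruct-2507",   # label only; llama-server serves one model
    messages=[{"role": "user", "content": "What is 1+1?"}],
    temperature=0.7,
    top_p=0.8,
    presence_penalty=1.0,   # optional, helps reduce endless repetitions
    max_tokens=32768,
)
print(response.choices[0].message.content)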
🦙 Ollama: Run Qwen3-30B-A3B-Instruct-2507 Tutorial
Install ollama if you haven't already! You can only run models up to 32B in size.
apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh
Run the model! Note you can call ollama serve in another terminal if it fails. We include all our fixes and suggested parameters (temperature etc.) in the params file of our Hugging Face upload!
ollama run hf.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF:UD-Q4_K_XL
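Once pulled, the model can also be queried programmatically through Ollama's local REST API. This is a rough sketch only: it assumes Ollama's default endpoint at localhost:11434, and option support can vary by Ollama version.

# Sketch: call the local Ollama server with the recommended Instruct settings.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF:UD-Q4_K_XL",
        "messages": [{"role": "user", "content": "What is 1+1?"}],
        "stream": False,
        "options": {"temperature": 0.7, "top_p": 0.8, "top_k": 20},
    },
    timeout=600,
)
print(resp.json()["message"]["content"])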
✨ Llama.cpp: Run Qwen3-30B-A3B-Instruct-2507 Tutorial
Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
-DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
You can directly pull from Hugging Face via:
./llama.cpp/llama-cli \
    -hf unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF:Q4_K_XL \
    --jinja -ngl 99 --threads -1 --ctx-size 32768 \
    --temp 0.7 --min-p 0.0 --top-p 0.80 --top-k 20 --presence-penalty 1.0
Or download the model via huggingface_hub (after installing it with pip install huggingface_hub hf_transfer). You can choose UD-Q4_K_XL or other quantized versions.
# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
repo_id = "unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF",
local_dir = "unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF",
allow_patterns = ["*UD-Q4_K_XL*"],
)
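Once the download finishes, you can point llama-cli at the local file instead of using -hf. The sketch below locates the GGUF with glob so the exact filename does not need to be hard-coded, and assumes llama-cli was built as shown above.

# Sketch: run llama-cli on the GGUF downloaded above.
import glob, subprocess

# Find the downloaded GGUF (first shard if the quant is split into parts)
gguf = sorted(glob.glob(
    "unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF/**/*UD-Q4_K_XL*.gguf", recursive=True
))[0]

subprocess.run([
    "./llama.cpp/llama-cli", "--model", gguf,
    "--jinja", "-ngl", "99", "--threads", "-1", "--ctx-size", "32768",
    "--temp", "0.7", "--min-p", "0.0", "--top-p", "0.80", "--top-k", "20",
    "--presence-penalty", "1.0",
], check=True)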
Thinking: Qwen3-30B-A3B-Thinking-2507
This model supports only thinking mode and a 256K context window natively. The default chat template adds <think> automatically, so you may see only a closing </think> tag in the output.
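Because the opening <think> already lives in the prompt, the generated text typically contains the reasoning trace followed by a closing </think> and then the final answer. A tiny sketch of splitting the two:

# Sketch: separate the reasoning trace from the final answer when the output
# contains only a closing </think> tag (the opening <think> is in the prompt).
def split_thinking(output: str):
    reasoning, sep, answer = output.partition("</think>")
    if not sep:              # no closing tag found: treat everything as the answer
        return "", output.strip()
    return reasoning.strip(), answer.strip()

reasoning, answer = split_thinking("Let me compute 1+1 step by step...</think>\n\n2")
print(answer)  # -> 2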
⚙️Best Practices
To achieve optimal performance, Qwen recommends the following settings:

- temperature = 0.6, top_p = 0.95, top_k = 20, min_p = 0.00 (llama.cpp's default min_p is 0.1).
- presence_penalty = 0.0 to 2.0 if your framework supports it, to reduce endless repetitions (llama.cpp turns it off by default); try 1.0 for example.
- The model supports up to 262,144 tokens of context natively, but you can set it to 32,768 tokens for less RAM use.
🦙 Ollama: Run Qwen3-30B-A3B-Thinking-2507 Tutorial
Install ollama if you haven't already! You can only run models up to 32B in size. To run the full 235B-A22B models, see here.
apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh
Run the model! Note you can call ollama serve in another terminal if it fails. We include all our fixes and suggested parameters (temperature etc.) in the params file of our Hugging Face upload!
ollama run hf.co/unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF:UD-Q4_K_XL
✨ Llama.cpp: Run Qwen3-30B-A3B-Thinking-2507 Tutorial
Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.
apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
-DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
You can directly pull from Hugging Face via:
./llama.cpp/llama-cli \
    -hf unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF:Q4_K_XL \
    --jinja -ngl 99 --threads -1 --ctx-size 32768 \
    --temp 0.6 --min-p 0.0 --top-p 0.95 --top-k 20 --presence-penalty 1.0
Or download the model via huggingface_hub (after installing it with pip install huggingface_hub hf_transfer). You can choose UD-Q4_K_XL or other quantized versions.
# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download
snapshot_download(
repo_id = "unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF",
local_dir = "unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF",
allow_patterns = ["*UD-Q4_K_XL*"],
)
📖 Run Qwen3-235B-A22B-2507 Tutorials
Below are guides for the Thinking and Instruct versions of the model.
Thinking: Qwen3-235B-A22B-Thinking-2507
This model supports only thinking mode and a 256K context window natively. The default chat template adds <think> automatically, so you may see only a closing </think> tag in the output.
⚙️ Best Practices
To achieve optimal performance, Qwen recommends these settings for the Thinking model:

- temperature = 0.6, top_p = 0.95, top_k = 20, min_p = 0.00 (llama.cpp's default min_p is 0.1).
- presence_penalty = 0.0 to 2.0 if your framework supports it, to reduce endless repetitions (llama.cpp turns it off by default); try 1.0 for example.
- Adequate Output Length: Use an output length of 32,768 tokens, which is adequate for most queries.
✨Run Qwen3-235B-A22B-Thinking via llama.cpp:
For Qwen3-235B-A22B, we will specifically use Llama.cpp for optimized inference and a plethora of options.
If you want a full precision unquantized version, use our Q8_K_XL, Q8_0 or BF16 versions!
Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
You can directly use llama.cpp to download the model, but I normally suggest using huggingface_hub (see the next step). To use llama.cpp directly, do:

./llama.cpp/llama-cli \
    -hf unsloth/Qwen3-235B-A22B-Thinking-2507-GGUF:Q2_K_XL \
    --threads -1 \
    --ctx-size 16384 \
    --n-gpu-layers 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --temp 0.6 \
    --min-p 0.0 \
    --top-p 0.95 \
    --top-k 20 \
    --presence-penalty 1.0
Or download the model via huggingface_hub (after installing it with pip install huggingface_hub hf_transfer). You can choose UD-Q2_K_XL or other quantized versions.

# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0" # Can sometimes rate limit, so set to 0 to disable
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/Qwen3-235B-A22B-Thinking-2507-GGUF",
    local_dir = "unsloth/Qwen3-235B-A22B-Thinking-2507-GGUF",
    allow_patterns = ["*UD-Q2_K_XL*"],
)
Run the model and try any prompt.

Edit --threads -1 for the number of CPU threads, --ctx-size 262144 for context length, and --n-gpu-layers 99 for how many layers to offload to the GPU. Lower it if your GPU runs out of memory, and remove it for CPU-only inference.

Use -ot ".ffn_.*_exps.=CPU" to offload all MoE layers to the CPU! This effectively allows you to fit all non-MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity.
./llama.cpp/llama-cli \
--model unsloth/Qwen3-235B-A22B-Thinking-2507-GGUF/UD-Q2_K_XL/Qwen3-235B-A22B-Thinking-2507-UD-Q2_K_XL-00001-of-00002.gguf \
--threads -1 \
--ctx-size 16384 \
--n-gpu-layers 99 \
-ot ".ffn_.*_exps.=CPU" \
--seed 3407 \
--temp 0.6 \
--min-p 0.0 \
--top-p 0.95 \
--top-k 20 \
--presence-penalty 1.0
Instruct: Qwen3-235B-A22B-Instruct-2507
Because this is a non-thinking model, there is no need to set enable_thinking=False, and the model does not generate <think></think> blocks.
⚙️Best Practices
To achieve optimal performance, we recommend the following settings:
1. Sampling Parameters: We suggest using temperature=0.7, top_p=0.8, top_k=20, and min_p=0.0. You can also set presence_penalty between 0 and 2 if your framework supports it, to reduce endless repetitions.
2. Adequate Output Length: We recommend an output length of 16,384 tokens for most queries, which is adequate for instruct models.
3. Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.
Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
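As a concrete illustration, here is a small sketch that appends these standardization instructions (taken verbatim from the guidance above) to benchmark prompts; the helper name is purely illustrative.

# Sketch: append the recommended standardization instructions to benchmark prompts.
MATH_SUFFIX = "Please reason step by step, and put your final answer within \\boxed{}."
MCQ_SUFFIX = (
    'Please show your choice in the `answer` field with only the choice letter, '
    'e.g., `"answer": "C"`.'
)

def build_prompt(question: str, kind: str) -> str:
    suffix = MATH_SUFFIX if kind == "math" else MCQ_SUFFIX
    return f"{question}\n\n{suffix}"

print(build_prompt("What is 12 * 13?", "math"))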
✨Run Qwen3-235B-A22B-Instruct via llama.cpp:
For Qwen3-235B-A22B, we will specifically use Llama.cpp for optimized inference and a plethora of options.
If you want a full precision unquantized version, use our Q8_K_XL, Q8_0 or BF16 versions!
Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
You can directly use llama.cpp to download the model, but I normally suggest using huggingface_hub (see the next step). To use llama.cpp directly, do:

./llama.cpp/llama-cli \
    -hf unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF:Q2_K_XL \
    --threads -1 \
    --ctx-size 16384 \
    --n-gpu-layers 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --temp 0.7 \
    --min-p 0.0 \
    --top-p 0.8 \
    --top-k 20 \
    --presence-penalty 1.0
Or download the model via huggingface_hub (after installing it with pip install huggingface_hub hf_transfer). You can choose UD-Q2_K_XL or other quantized versions.

# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0" # Can sometimes rate limit, so set to 0 to disable
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF",
    local_dir = "unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF",
    allow_patterns = ["*UD-Q2_K_XL*"],
)
Run the model and try any prompt.

Edit --threads -1 for the number of CPU threads, --ctx-size 262144 for context length, and --n-gpu-layers 99 for how many layers to offload to the GPU. Lower it if your GPU runs out of memory, and remove it for CPU-only inference.

Use -ot ".ffn_.*_exps.=CPU" to offload all MoE layers to the CPU! This effectively allows you to fit all non-MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity.
./llama.cpp/llama-cli \
--model unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF/UD-Q2_K_XL/Qwen3-235B-A22B-Instruct-2507-UD-Q2_K_XL-00001-of-00002.gguf \
--threads -1 \
--ctx-size 16384 \
--n-gpu-layers 99 \
-ot ".ffn_.*_exps.=CPU" \
--temp 0.7 \
--min-p 0.0 \
--top-p 0.8 \
--top-k 20
🛠️ Improving generation speed
If you have more VRAM, you can try offloading more MoE layers, or offloading whole layers themselves.
Normally, -ot ".ffn_.*_exps.=CPU" offloads all MoE layers to the CPU! This effectively allows you to fit all non-MoE layers on 1 GPU, improving generation speeds. You can customize the regex expression to fit more layers if you have more GPU capacity.
If you have a bit more GPU memory, try -ot ".ffn_(up|down)_exps.=CPU", which offloads the up and down projection MoE layers.
Try -ot ".ffn_(up)_exps.=CPU" if you have even more GPU memory. This offloads only the up projection MoE layers.
You can also customize the regex, for example -ot "\.(6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.=CPU" means to offload the gate, up and down MoE layers, but only from the 6th layer onwards (a small helper for building such patterns is sketched below).
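If you would rather not hand-edit that regex, here is a purely illustrative sketch that builds an equivalent pattern for an arbitrary starting layer by enumerating layer indices; the total layer count is model-dependent, so the value in the example call is only a placeholder.

# Sketch: build an -ot pattern that keeps early MoE layers on the GPU and
# offloads the gate/up/down expert tensors of later layers to the CPU.
def moe_offload_pattern(first_cpu_layer: int, total_layers: int) -> str:
    layer_ids = "|".join(str(i) for i in range(first_cpu_layer, total_layers))
    return rf"\.({layer_ids})\.ffn_(gate|up|down)_exps.=CPU"

# Example: offload experts from layer 6 onwards (94 is just a placeholder layer count)
print(moe_offload_pattern(6, 94))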
The latest llama.cpp release also introduces a high-throughput mode: use llama-parallel. Read more about it here. You can also quantize the KV cache to 4 bits, for example, to reduce VRAM / RAM movement, which can also make generation faster. The next section covers KV cache quantization.
📐How to fit long context
To fit longer context, you can use KV cache quantization to quantize the K and V caches to lower bits. This can also increase generation speed due to reduced RAM / VRAM data movement. The allowed options for K quantization (the default is f16) include:
--cache-type-k f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
Use the _1 variants (for example q4_1, q5_1) for somewhat increased accuracy, albeit slightly slower; for instance, try --cache-type-k q4_1.
You can also quantize the V cache, but you will need to compile llama.cpp with Flash Attention support via -DGGML_CUDA_FA_ALL_QUANTS=ON and pass --flash-attn to enable it. Once llama.cpp is built with Flash Attention, you can then use --cache-type-v q4_1 as well.
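Putting the pieces together, here is a sketch of a launch that combines Flash Attention with quantized K and V caches. It assumes llama.cpp was rebuilt with -DGGML_CUDA_FA_ALL_QUANTS=ON as described, and that you have enough memory for the long context shown.

# Sketch: long context with Flash Attention and 4-bit K/V caches.
import subprocess

subprocess.run([
    "./llama.cpp/llama-cli",
    "-hf", "unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF:UD-Q4_K_XL",
    "--jinja", "-ngl", "99", "--threads", "-1",
    "--ctx-size", "262144",          # long context, made cheaper by the quantized caches
    "--flash-attn",                  # required for quantizing the V cache
    "--cache-type-k", "q4_1",
    "--cache-type-v", "q4_1",
    "--temp", "0.7", "--min-p", "0.0", "--top-p", "0.80", "--top-k", "20",
], check=True)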
🦥 Fine-tuning Qwen3-2507 with Unsloth
Unsloth makes Qwen3 and Qwen3-2507 fine-tuning 2x faster, uses 70% less VRAM, and supports 8x longer context lengths. Because the smallest Qwen3-2507 model is the 30B-A3B variant, you will need roughly a 40GB GPU (e.g. an A100) to fine-tune it using QLoRA (4-bit).
Because the model cannot fit in Colab's free 16GB GPUs, you will need a 40GB A100 to run a notebook. You can use our Conversational notebook, replacing the dataset with your own. This time you do not need to combine reasoning traces into your dataset, as the Instruct model has no reasoning.
If you have an old version of Unsloth and/or are fine-tuning locally, install the latest version of Unsloth:
pip install --upgrade --force-reinstall --no-cache-dir unsloth unsloth_zoo
Fine-tuning Qwen3-2507 MoE models
Fine-tuning support includes the MoE models: 30B-A3B and 235B-A22B. Qwen3-30B-A3B works on 30GB VRAM with Unsloth. When fine-tuning MoEs, it is probably not a good idea to fine-tune the router layer, so we disable it by default.
The 30B-A3B fits in 30GB VRAM, but you may lack RAM or disk space, since the full 16-bit model must be downloaded and converted to 4-bit on the fly for QLoRA fine-tuning. This is due to issues importing 4-bit BnB MoE models directly, and it only affects MoE models.
If you're fine-tuning the MoE models, please use FastModel and not FastLanguageModel:
from unsloth import FastModel
import torch
model, tokenizer = FastModel.from_pretrained(
model_name = "unsloth/Qwen3-30B-A3B-Instruct-2507",
max_seq_length = 2048, # Choose any for long context!
load_in_4bit = True, # 4 bit quantization to reduce memory
load_in_8bit = False, # [NEW!] A bit more accurate, uses 2x memory
full_finetuning = False, # [NEW!] We have full finetuning now!
# token = "hf_...", # use one if using gated models
)
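From here you would normally attach LoRA adapters before training. The following is only a rough sketch with illustrative hyperparameters; the exact arguments and defaults are best taken from the Unsloth notebooks.

# Illustrative LoRA setup for the MoE model loaded above; values are examples only.
model = FastModel.get_peft_model(
    model,
    r = 16,                                   # LoRA rank
    lora_alpha = 16,
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = "unsloth",   # reduces VRAM for longer context
)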
