
🐋DeepSeek-V3.1: How to Run Locally

A guide on how to run DeepSeek-V3.1 on your own local device!

DeepSeek’s V3.1 update introduces hybrid reasoning inference, combining 'think' and 'non-think' into one model. The full 671B-parameter model requires 715GB of disk space, while the dynamic 2-bit quantized version uses 245GB (roughly a 65% reduction in size). GGUF: DeepSeek-V3.1-GGUF

All uploads use Unsloth Dynamic 2.0 for SOTA 5-shot MMLU and KL Divergence performance, meaning you can run & fine-tune quantized DeepSeek LLMs with minimal accuracy loss.

Tutorials navigation:

  • Run in llama.cpp

  • Run in Ollama/Open WebUI

The 1-bit dynamic quant TQ1_0 (1-bit for unimportant MoE layers, 2-4-bit for important MoE layers, and 6-8-bit for the rest) uses 170GB of disk space. It works well on a single 24GB GPU with 128GB of RAM using MoE offloading, and it also works natively in Ollama!

You must use --jinja for our llama.cpp quants - this enables our fixed chat template. You may get incorrect results if you do not use --jinja.

The 2-bit quants will fit on a single 24GB GPU (with MoE layers offloaded to RAM). Expect around 5 tokens/s with this setup if you also have around 128GB of RAM. For 5+ tokens/s and optimal performance, we recommend at least 226GB of unified memory or 226GB of combined RAM + VRAM. To learn how to increase generation speed and fit longer contexts, read here.

🦋Chat template bug fixes

We fixed a few issues with DeepSeek V3.1's chat template since it did not function correctly in llama.cpp and other engines:

  1. DeepSeek V3.1 is a hybrid reasoning model, meaning you can toggle reasoning through the chat template. DeepSeek's chat template introduced thinking = True, but most other models use enable_thinking = True, so we added the option to use enable_thinking as a keyword as well (see the sketch after this list).

  2. llama.cpp's Jinja renderer, minja, does not allow extra arguments in the .split() method, so .split(text, 1) works in Python but not in minja. We had to change this so llama.cpp functions correctly without erroring out. With other quants you will get the following error: terminate called after throwing an instance of 'std::runtime_error' what(): split method must have between 1 and 1 positional arguments and between 0 and 0 keyword arguments at row 3, column 1908. We fixed this in all our quants!
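
As a rough illustration of the enable_thinking keyword, here is a minimal sketch using Hugging Face transformers. It assumes the unsloth/DeepSeek-V3.1 repo with our fixed chat template; adjust the model id to whichever upload you are using.

from transformers import AutoTokenizer

# Assumes the unsloth/DeepSeek-V3.1 upload with the fixed chat template
tokenizer = AutoTokenizer.from_pretrained("unsloth/DeepSeek-V3.1")
messages = [{"role": "user", "content": "What is 2+2?"}]

# Non-thinking (default): the rendered prompt ends with <|Assistant|></think>
prompt = tokenizer.apply_chat_template(
    messages, tokenize = False, add_generation_prompt = True,
)

# Thinking mode via the enable_thinking keyword (thinking = True also works):
# the rendered prompt ends with <|Assistant|><think>
prompt_thinking = tokenizer.apply_chat_template(
    messages, tokenize = False, add_generation_prompt = True,
    enable_thinking = True,
)
print(prompt_thinking)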

According to DeepSeek, these are the recommended settings for V3.1 inference:

  • Set the temperature to 0.6 to reduce repetition and incoherence.

  • Set top_p to 0.95 (recommended)

  • 128K context length or less

  • Use --jinja for llama.cpp variants - we fixed some chat template issues as well!

  • Use enable_thinking = True to enable reasoning/thinking mode. By default it is set to non-reasoning.

🔢 Chat template/prompt format

You do not need to force <think>\n, but you can still add it in! With the prefix below, DeepSeek V3.1 generates responses to queries in non-thinking mode. Unlike DeepSeek V3, it introduces an additional </think> token.

<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>

A BOS is forcibly added, and an EOS separates each interaction. To avoid double BOS tokens during inference, call tokenizer.encode(..., add_special_tokens = False), since the chat template already adds a BOS token. For llama.cpp / GGUF inference, you should skip the BOS since llama.cpp adds it automatically.
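
As a minimal sketch (assuming the unsloth/DeepSeek-V3.1 tokenizer), you can verify that only one BOS token ends up in your input:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/DeepSeek-V3.1")
messages = [{"role": "user", "content": "Hello!"}]

# The chat template already prepends <|begin▁of▁sentence|> ...
prompt = tokenizer.apply_chat_template(
    messages, tokenize = False, add_generation_prompt = True,
)

# ... so encode WITHOUT special tokens to avoid a double BOS
input_ids = tokenizer.encode(prompt, add_special_tokens = False)
assert input_ids.count(tokenizer.bos_token_id) == 1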

📔 Non-Thinking Mode (use thinking = False or enable_thinking = False; this is the default)

First-Turn

Prefix: <|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>

With the given prefix, DeepSeek V3.1 generates responses to queries in non-thinking mode. Unlike DeepSeek V3, it introduces an additional token </think>.

Multi-Turn

Context: <|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>

Prefix: <|User|>{query}<|Assistant|></think>

By concatenating the context and the prefix, we obtain the correct prompt for the query.

📚 Thinking Mode (use thinking = True or enable_thinking = True; non-thinking is the default)

First-Turn

Prefix: <|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|><think>

The prefix of thinking mode is similar to DeepSeek-R1.

Multi-Turn

Context: <|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>

Prefix: <|User|>{query}<|Assistant|><think>

The multi-turn template is the same as the non-thinking multi-turn chat template. This means the thinking content of previous turns is dropped, while the </think> token is retained in every turn of the context.
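
As an illustration only (the templates above are authoritative), here is a rough sketch of how the context and prefix are concatenated, with the final prefix switching between </think> (non-thinking) and <think> (thinking):

# Sketch of the prompt assembly described above; the string literals follow
# the templates in this section rather than an official DeepSeek API.
BOS, EOS = "<|begin▁of▁sentence|>", "<|end▁of▁sentence|>"

def build_prompt(system_prompt, history, query, thinking = False):
    # history is a list of (query, response) pairs; </think> is kept in every past turn
    context = BOS + system_prompt
    for past_query, past_response in history:
        context += f"<|User|>{past_query}<|Assistant|></think>{past_response}{EOS}"
    # The final prefix enables or disables reasoning for the new query
    prefix = f"<|User|>{query}<|Assistant|>" + ("<think>" if thinking else "</think>")
    return context + prefix

print(build_prompt("You are helpful.", [("Hi", "Hello!")], "What is 2+2?", thinking = True))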

🏹 Tool Calling

Tool calling is supported in non-thinking mode. The format is:

<|begin▁of▁sentence|>{system prompt}{tool_description}<|User|>{query}<|Assistant|></think> where the tool_description area is populated after the system prompt.
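
For illustration only, here is a minimal sketch of this tool-calling prompt. The tool_description value is a hypothetical placeholder; the exact description format follows DeepSeek's tool schema, which is not reproduced here.

# Sketch only: tool_description is a hypothetical placeholder string; consult
# DeepSeek's documentation for the exact schema expected in this slot.
BOS = "<|begin▁of▁sentence|>"
system_prompt = "You are a helpful assistant."
tool_description = "<tool descriptions go here>"
query = "What is the weather in Paris?"

prompt = f"{BOS}{system_prompt}{tool_description}<|User|>{query}<|Assistant|></think>"
print(prompt)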

▶️Run DeepSeek-V3.1 Tutorials:

🦙 Run in Ollama/Open WebUI

1

Install ollama if you haven't already! To run more variants of the model, see here.

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh
2

Run the model! Note that you can call ollama serve in another terminal if it fails. We include all our fixes and suggested parameters (temperature etc.) in the params file of our Hugging Face upload! (NEW) To run the full DeepSeek-V3.1 model in Ollama, you can use our TQ1_0 (170GB) quant:

OLLAMA_MODELS=unsloth ollama serve &

OLLAMA_MODELS=unsloth ollama run hf.co/unsloth/DeepSeek-V3.1-GGUF:TQ1_0
3

To run other quants, you first need to merge the GGUF split files into a single file as shown below. Then you can run the merged model locally.

./llama.cpp/llama-gguf-split --merge \
  DeepSeek-V3.1-GGUF/DeepSeek-V3.1-UD-Q2_K_XL/DeepSeek-V3.1-UD-Q2_K_XL-00001-of-00006.gguf \
  merged_file.gguf
OLLAMA_MODELS=unsloth ollama serve &

OLLAMA_MODELS=unsloth ollama run merged_file.gguf
4

Open WebUI also made a step-by-step tutorial on how to run R1. For V3.1, you just need to replace the R1 quant with the new V3.1 quant.

✨ Run in llama.cpp

1

Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli llama-server
cp llama.cpp/build/bin/llama-* llama.cpp
2

If you want to use llama.cpp directly to load models, you can do the below: (:Q2_K_XL) is the quantization type. You can also download via Hugging Face (step 3). This is similar to ollama run. Use export LLAMA_CACHE="folder" to force llama.cpp to save to a specific location. Remember the model has a maximum context length of 128K.

export LLAMA_CACHE="unsloth/DeepSeek-V3.1-GGUF"
./llama.cpp/llama-cli \
    -hf unsloth/DeepSeek-V3.1-GGUF:Q2_K_XL \
    --cache-type-k q4_0 \
    --jinja \
    --n-gpu-layers 99 \
    --temp 0.6 \
    --top_p 0.95 \
    --min_p 0.01 \
    --ctx-size 16384 \
    --seed 3407 \
    -ot ".ffn_.*_exps.=CPU"
3

Download the model via the snippet below (after installing the required packages with pip install huggingface_hub hf_transfer). You can choose UD-Q2_K_XL (dynamic 2-bit quant) or other quantized versions like Q4_K_M. We recommend using our 2.7-bit dynamic quant UD-Q2_K_XL to balance size and accuracy.

# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0" # Can sometimes rate limit, so set to 0 to disable
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/DeepSeek-V3.1-GGUF",
    local_dir = "unsloth/DeepSeek-V3.1-GGUF",
    allow_patterns = ["*UD-Q2_K_XL*"], # Dynamic 2bit Use "*UD-TQ1_0*" for Dynamic 1bit
)
4

You can edit --threads 32 to set the number of CPU threads, --ctx-size 16384 for the context length, and --n-gpu-layers 2 for how many layers to offload to the GPU. Try adjusting --n-gpu-layers if your GPU runs out of memory, and remove it for CPU-only inference.

./llama.cpp/llama-cli \
    --model unsloth/DeepSeek-V3.1-GGUF/UD-Q2_K_XL/DeepSeek-V3.1-UD-Q2_K_XL-00001-of-00006.gguf \
    --cache-type-k q4_0 \
    --jinja \
    --threads -1 \
    --n-gpu-layers 99 \
    --temp 0.6 \
    --top_p 0.95 \
    --min_p 0.01 \
    --ctx-size 16384 \
    --seed 3407 \
    -ot ".ffn_.*_exps.=CPU"
5

Get the 1bit version (170GB) if you don't have enough combined RAM and VRAM:

from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/DeepSeek-V3.1-GGUF",
    local_dir = "unsloth/DeepSeek-V3.1-GGUF",
    allow_patterns = ["*UD-TQ1_0*"], # Use "*UD-Q2_K_XL*" for Dynamic 2bit
)

✨ Deploy with llama-server and OpenAI's completion library

To use llama-server for deployment, use the following command:

./llama.cpp/llama-server \
    --model unsloth/DeepSeek-V3.1-GGUF/DeepSeek-V3.1-UD-TQ1_0.gguf \
    --alias "unsloth/DeepSeek-V3.1" \
    --threads -1 \
    --n-gpu-layers 999 \
    -ot ".ffn_.*_exps.=CPU" \
    --prio 3 \
    --min_p 0.01 \
    --ctx-size 16384 \
    --port 8001 \
    --jinja

Then use OpenAI's Python library after pip install openai :

from openai import OpenAI
import json
openai_client = OpenAI(
    base_url = "http://127.0.0.1:8001/v1",
    api_key = "sk-no-key-required",
)
completion = openai_client.chat.completions.create(
    model = "unsloth/DeepSeek-V3.1",
    messages = [{"role": "user", "content": "What is 2+2?"},],
)
print(completion.choices[0].message.content)

💽Model uploads

ALL our uploads, including those that are not imatrix-based or dynamic, utilize our calibration dataset, which is specifically optimized for conversational, coding, and language tasks.

  • Full DeepSeek-V3.1 model uploads below:

We also uploaded IQ4_NL and Q4_1 quants, which run faster specifically on ARM and Apple devices respectively.

| MoE Bits | Type + Link | Disk Size | Details |
| --- | --- | --- | --- |
| 1.66bit | UD-TQ1_0 | 170GB | 1.92/1.56bit |
| 1.78bit | - | 185GB | 2.06/1.56bit |
| 1.93bit | - | 200GB | 2.5/2.06/1.56bit |
| 2.42bit | - | 216GB | 2.5/2.06bit |
| 2.71bit | UD-Q2_K_XL | 251GB | 3.5/2.5bit |
| 3.12bit | - | 273GB | 3.5/2.06bit |
| 3.5bit | - | 296GB | 4.5/3.5bit |
| 4.5bit | - | 384GB | 5.5/4.5bit |
| 5.5bit | - | 481GB | 6.5/5.5bit |

We've also uploaded versions in BF16 format and the original FP8 (float8) format.

🏂 Improving generation speed

If you have more VRAM, you can try moving more MoE layers, or even whole layers, onto the GPU.

Normally, -ot ".ffn_.*_exps.=CPU" offloads all MoE layers to the CPU! This effectively allows you to fit all non-MoE layers on a single GPU, improving generation speed. You can customize the regex to keep more layers on the GPU if you have more GPU capacity.

If you have a bit more GPU memory, try -ot ".ffn_(up|down)_exps.=CPU". This offloads the up and down projection MoE layers.

Try -ot ".ffn_(up)_exps.=CPU" if you have even more GPU memory. This offloads only up projection MoE layers.

You can also customize the regex, for example -ot "\.(6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.=CPU" means to offload gate, up and down MoE layers but only from the 6th layer onwards.
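
If you want to sanity-check a custom pattern before launching, here is a small, purely illustrative sketch. The tensor names are examples following the usual GGUF naming for the expert layers (blk.<layer>.ffn_{gate,up,down}_exps), so verify them against your own model:

import re

# The regex part of the -ot flag above (everything before "=CPU")
pattern = re.compile(r"\.(6|7|8|9|[0-9][0-9]|[0-9][0-9][0-9])\.ffn_(gate|up|down)_exps.")

# Illustrative tensor names; check the real names with llama.cpp's load output or gguf tools
examples = [
    "blk.3.ffn_up_exps.weight",    # layer 3: not matched, stays on GPU
    "blk.7.ffn_gate_exps.weight",  # layer 7: matched, offloaded to CPU
    "blk.42.ffn_down_exps.weight", # layer 42: matched, offloaded to CPU
    "blk.7.attn_q.weight",         # attention tensor: never matched
]
for name in examples:
    print(name, "->", "CPU" if pattern.search(name) else "GPU")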

The latest llama.cpp release also introduces a high-throughput mode - use llama-parallel. Read more about it here. You can also quantize the KV cache to, for example, 4 bits to reduce VRAM / RAM movement, which can also speed up generation.

📐How to fit long context (full 128K)

To fit longer context, you can use KV cache quantization to quantize the K and V caches to lower bits. This can also increase generation speed due to reduced RAM / VRAM data movement. The allowed options for K quantization (default is f16) include the below.

--cache-type-k f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1

You should use the _1 variants (for example q4_1, q5_1) for somewhat increased accuracy, albeit slightly slower.

You can also quantize the V cache, but you will need to compile llama.cpp with Flash Attention support via -DGGML_CUDA_FA_ALL_QUANTS=ON and use --flash-attn to enable it. Then you can use it together with --cache-type-k:

--cache-type-v f32, f16, bf16, q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1
