🐋 DeepSeek-V3.1

A guide on how to run DeepSeek-V3.1 on your own local device!

DeepSeek’s V3.1 update introduces hybrid reasoning inference, combining 'think' and 'non-think' modes into one model. The full 671B-parameter model requires 715GB of disk space, while the quantized dynamic 2-bit version uses 245GB (roughly a 65% reduction in size). GGUF: DeepSeek-V3.1-GGUF

All uploads use Unsloth Dynamic 2.0 for SOTA 5-shot MMLU and KL Divergence performance, meaning you can run & fine-tune quantized DeepSeek LLMs with minimal accuracy loss.

Tutorials navigation:

  • Run in llama.cpp

  • Run in Ollama/Open WebUI

The 2-bit quants will fit on a single 24GB GPU with the MoE layers offloaded to system RAM (see the -ot flag in the llama.cpp examples below). Expect around 7 tokens/s with this setup if you also have about 128GB of system RAM. It is recommended to have at least 246GB of RAM to run this quant; for optimal performance (5+ tokens/s) you will need at least 246GB of unified memory or 246GB of combined RAM+VRAM. We suggest using our 2.7-bit (Q2_K_XL) or 2.4-bit (IQ2_XXS) quant to balance size and accuracy.
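
If you are unsure whether your machine clears that bar, here is a rough, hedged sketch for checking combined RAM + VRAM against the ~246GB guideline (it assumes an NVIDIA GPU with nvidia-smi on PATH and that the psutil package is installed):

# Rough check of combined RAM + VRAM against the ~246GB guideline above.
import subprocess
import psutil

ram_gb = psutil.virtual_memory().total / 1e9
try:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader,nounits"]
    )
    vram_gb = sum(int(x) for x in out.split()) / 1e3  # MiB -> GB (approximate)
except (OSError, subprocess.CalledProcessError):
    vram_gb = 0.0  # no NVIDIA GPU detected; CPU-only inference

total_gb = ram_gb + vram_gb
print(f"RAM: {ram_gb:.0f} GB, VRAM: {vram_gb:.0f} GB, combined: {total_gb:.0f} GB")
print("Meets the 246GB guideline" if total_gb >= 246 else "Below the recommended 246GB")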

According to DeepSeek, these are the recommended settings for V3.1 inference (a sketch of passing them through an OpenAI-compatible API follows the list):

  • Set the temperature to 0.6 to reduce repetition and incoherence.

  • Set top_p to 0.95 (recommended)

  • 128K context length or less
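
For example, if you serve the model behind an OpenAI-compatible endpoint (such as llama-server), these settings can be passed per request. A minimal sketch, assuming a hypothetical local server at http://localhost:8080/v1 and the openai Python package:

# Pass DeepSeek's recommended sampling settings through an OpenAI-compatible API.
from openai import OpenAI

# base_url and model name are placeholders; use whatever your local server exposes.
client = OpenAI(base_url = "http://localhost:8080/v1", api_key = "not-needed-locally")

response = client.chat.completions.create(
    model = "DeepSeek-V3.1",
    messages = [{"role": "user", "content": "Summarize what a KV cache does."}],
    temperature = 0.6,   # reduces repetition and incoherence
    top_p = 0.95,        # recommended nucleus sampling value
    max_tokens = 1024,
)
print(response.choices[0].message.content)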

🔢 Chat template/prompt format

You do not need to force <think>\n , but you can still add it in! By default, the prompt format below puts DeepSeek-V3.1 in non-thinking mode; the detailed first-turn and multi-turn formats for both modes follow.

<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>

A BOS token is forcibly added, and an EOS token separates each interaction. To avoid double BOS tokens during inference, you should only call tokenizer.encode(..., add_special_tokens = False), since the chat template already adds a BOS token. For llama.cpp / GGUF inference, skip the BOS in your prompt, since llama.cpp adds it automatically. The full prompt formats are below.
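
For example, a minimal sketch with Hugging Face transformers, assuming the tokenizer from the official deepseek-ai/DeepSeek-V3.1 repository (llama.cpp / GGUF users can skip this):

# The chat template already prepends the BOS token, so encode() must not add
# special tokens again, otherwise you end up with a double BOS.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.1")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 1+1?"},
]

prompt = tokenizer.apply_chat_template(
    messages, tokenize = False, add_generation_prompt = True
)
input_ids = tokenizer.encode(prompt, add_special_tokens = False)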

Non-Thinking Mode

First-Turn

Prefix: <|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>

With the given prefix, DeepSeek V3.1 generates responses to queries in non-thinking mode. Unlike DeepSeek V3, it introduces an additional token </think>.

Multi-Turn

Context: <|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>

Prefix: <|User|>{query}<|Assistant|></think>

By concatenating the context and the prefix, we obtain the correct prompt for the query.
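
For illustration, a minimal sketch of this concatenation in Python; the special-token strings are copied verbatim from the template above:

# Build a non-thinking multi-turn prompt exactly as documented above.
BOS = "<|begin▁of▁sentence|>"
EOS = "<|end▁of▁sentence|>"

def build_non_thinking_prompt(system_prompt, history, new_query):
    """history is a list of (query, response) pairs from completed turns."""
    context = BOS + system_prompt
    for query, response in history:
        context += f"<|User|>{query}<|Assistant|></think>{response}{EOS}"
    # Prefix for the new query: non-thinking mode ends with </think>
    return context + f"<|User|>{new_query}<|Assistant|></think>"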

Thinking Mode

First-Turn

Prefix: <|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|><think>

The prefix of thinking mode is similar to DeepSeek-R1.

Multi-Turn

Context: <|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>

Prefix: <|User|>{query}<|Assistant|><think>

The multi-turn template is the same as the non-thinking multi-turn chat template: the thinking content of earlier turns is dropped, but </think> is retained in every turn of the context.
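
In other words, the only change from the non-thinking sketch earlier is the final prefix, e.g.:

# Thinking mode reuses the same context; only the prefix for the new query
# ends with <think> so the model first emits its reasoning.
def thinking_prefix(new_query):
    return f"<|User|>{new_query}<|Assistant|><think>"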

ToolCall

Toolcall is supported in non-thinking mode. The format is:

<|begin▁of▁sentence|>{system prompt}{tool_description}<|User|>{query}<|Assistant|></think>

where {tool_description} describes the available tools and is appended directly after the system prompt; refer to DeepSeek's official model documentation for its exact format.

Run DeepSeek-V3.1 Tutorials:

🦙 Run in Ollama/Open WebUI

1

Install ollama if you haven't already! To run more variants of the model, see here.

apt-get update
apt-get install pciutils -y
curl -fsSL https://ollama.com/install.sh | sh
2

Run the model! Note you can call ollama serve in another terminal if it fails. We include all our fixes and suggested parameters (temperature etc.) in params in our Hugging Face upload. To run the quant, you first need to merge the GGUF split files into a single file, as shown below, and then you can run the model locally.

./llama.cpp/llama-gguf-split --merge \
  DeepSeek-V3.1-GGUF/DeepSeek-V3.1-UD-Q2_K_XL/DeepSeek-V3.1-UD-Q2_K_XL-00001-of-00006.gguf \
  merged_file.gguf
OLLAMA_MODELS=unsloth_downloaded_models ollama serve &

ollama run hf.co/unsloth/DeepSeek-V3.1-GGUF:UD_Q2_K_XL
3

Open WebUI also made a step-by-step tutorial on how to run R1; for V3.1, you just need to replace the R1 GGUF with the new V3.1 quant.

✨ Run in llama.cpp

1

Obtain the latest llama.cpp on GitHub here. You can follow the build instructions below as well. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev -y
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli
cp llama.cpp/build/bin/llama-* llama.cpp
2

If you want to use llama.cpp directly to load models, you can do the below. The suffix (:Q2_K_XL) is the quantization type. You can also download via Hugging Face (step 3). This is similar to ollama run. Use export LLAMA_CACHE="folder" to force llama.cpp to save downloads to a specific location. Remember the model has a maximum context length of 128K tokens.

export LLAMA_CACHE="unsloth/DeepSeek-V3.1-GGUF"
./llama.cpp/llama-cli \
    -hf unsloth/DeepSeek-V3.1-GGUF:Q2_K_XL \
    --cache-type-k q4_0 \
    --threads -1 \
    --n-gpu-layers 99 \
    --prio 3 \
    --temp 0.6 \
    --top_p 0.95 \
    --min_p 0.01 \
    --ctx-size 16384 \
    --seed 3407 \
    -ot ".ffn_.*_exps.=CPU"
3

Download the model via the script below (after running pip install huggingface_hub hf_transfer). You can choose UD-Q2_K_XL (dynamic 2-bit quant) or other quantized versions like Q4_K_M. We recommend using our 2.7-bit dynamic quant UD-Q2_K_XL to balance size and accuracy.

# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0" # Can sometimes rate limit, so set to 0 to disable
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/DeepSeek-V3.1-GGUF",
    local_dir = "unsloth/DeepSeek-V3.1-GGUF",
    allow_patterns = ["*UD-Q2_K_XL*"], # Dynamic 2-bit (251GB); change this pattern to download other quants
)
4

Run the model by prompting it. You can edit --threads for the number of CPU threads, --ctx-size for the context length, and --n-gpu-layers for how many layers to offload to the GPU. Try lowering --n-gpu-layers if your GPU runs out of memory, and remove it for CPU-only inference.

./llama.cpp/llama-cli \
    --model unsloth/DeepSeek-V3.1-GGUF/UD-Q2_K_XL/DeepSeek-V3.1-UD-Q2_K_XL-00001-of-00006.gguf \
    --cache-type-k q4_0 \
    --threads -1 \
    --n-gpu-layers 99 \
    --prio 3 \
    --temp 0.6 \
    --top_p 0.95 \
    --min_p 0.01 \
    --ctx-size 16384 \
    --seed 3407 \
    -ot ".ffn_.*_exps.=CPU" \
    -no-cnv \
    --prompt "<|User|>Create a Flappy Bird game in Python. You must include these things:\n1. You must use pygame.\n2. The background color should be randomly chosen and is a light shade. Start with a light blue color.\n3. Pressing SPACE multiple times will accelerate the bird.\n4. The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.\n5. Place on the bottom some land colored as dark brown or yellow chosen randomly.\n6. Make a score shown on the top right side. Increment if you pass pipes and don't hit them.\n7. Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.\n8. When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.\nThe final game should be inside a markdown section in Python. Check your code for errors and fix them before the final markdown section.<|Assistant|>"
Full example prompt to run the model
./llama.cpp/llama-cli \
    --model unsloth/DeepSeek-V3.1-GGUF/UD-IQ1_S/DeepSeek-V3.1-UD-IQ1_S-00001-of-00004.gguf \
    --cache-type-k q4_0 \
    --threads -1 \
    --n-gpu-layers 99 \
    --prio 3 \
    --temp 0.6 \
    --top_p 0.95 \
    --min_p 0.01 \
    --ctx-size 16384 \
    --seed 3407 \
    -ot ".ffn_.*_exps.=CPU" \
    -no-cnv \
    --prompt "<|User|>Write a Python program that shows 20 balls bouncing inside a spinning heptagon:\n- All balls have the same radius.\n- All balls have a number on it from 1 to 20.\n- All balls drop from the heptagon center when starting.\n- Colors are: #f8b862, #f6ad49, #f39800, #f08300, #ec6d51, #ee7948, #ed6d3d, #ec6800, #ec6800, #ee7800, #eb6238, #ea5506, #ea5506, #eb6101, #e49e61, #e45e32, #e17b34, #dd7a56, #db8449, #d66a35\n- The balls should be affected by gravity and friction, and they must bounce off the rotating walls realistically. There should also be collisions between balls.\n- The material of all the balls determines that their impact bounce height will not exceed the radius of the heptagon, but higher than ball radius.\n- All balls rotate with friction, the numbers on the ball can be used to indicate the spin of the ball.\n- The heptagon is spinning around its center, and the speed of spinning is 360 degrees per 5 seconds.\n- The heptagon size should be large enough to contain all the balls.\n- Do not use the pygame library; implement collision detection algorithms and collision response etc. by yourself. The following Python libraries are allowed: tkinter, math, numpy, dataclasses, typing, sys.\n- All codes should be put in a single Python file.<|Assistant|>"

Model uploads

All our uploads, including those that are not imatrix-based or dynamic, utilize our calibration dataset, which is specifically optimized for conversational, coding, and language tasks.

  • Full DeepSeek-V3.1 model uploads below:

We also uploaded IQ4_NL and Q4_1 quants, which run faster specifically on ARM and Apple devices respectively.

MoE Bits    Type + Link        Disk Size    Details
2.42bit     IQ2_XXS            216GB        2.5/2.06bit
2.71bit     Q2_K_XL            251GB        3.5/2.5bit
3.12bit     (see GGUF repo)    273GB        3.5/2.06bit
3.5bit      (see GGUF repo)    296GB        4.5/3.5bit
4.5bit      (see GGUF repo)    384GB        5.5/4.5bit
5.5bit      (see GGUF repo)    481GB        6.5/5.5bit

We've also uploaded versions in BF16 format, as well as the original FP8 (float8) format.
