🔮 All Our Models
See the table below for all of our Dynamic GGUF, 4-bit, and 16-bit models uploaded to Hugging Face.
GGUFs can be run in your favorite local inference tools like Ollama, Open WebUI and llama.cpp.
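For example, a GGUF download can be loaded directly in Python via llama-cpp-python (the Python bindings for llama.cpp). This is a minimal sketch, not a required setup: the filename below is a placeholder for whichever GGUF you downloaded from Hugging Face.

```python
# Minimal sketch: run a downloaded GGUF locally with llama-cpp-python.
# The model_path is a placeholder for any GGUF file from the table below.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-1B-Instruct-Q4_K_M.gguf",  # placeholder GGUF filename
    n_ctx=2048,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

output = llm(
    "Explain what a GGUF file is in one sentence.",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```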
4-bit and 16-bit models can be used for inference serving or fine-tuning.
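As a minimal sketch of the fine-tuning path, a 4-bit upload can be loaded with Unsloth's `FastLanguageModel` and given LoRA adapters; the model name below is just an example, swap in any 4-bit repo from the table.

```python
# Minimal sketch: load a 4-bit upload for fine-tuning with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # example 4-bit repo from the table
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```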
Here's a table of all our GGUF + 4-bit model uploads:
| Model | GGUF | Instruct (4-bit) | Base (4-bit) |
| --- | --- | --- | --- |