Unsloth now supports full fine-tuning, 8-bit, and all models! 🦥

🛠️ Unsloth Environment Flags

Advanced flags which may be useful if you encounter breaking finetunes, or if you want to disable certain features.

| Environment variable | Purpose |
| --- | --- |
| `os.environ["UNSLOTH_RETURN_LOGITS"] = "1"` | Forcibly returns logits - useful for evaluation if logits are needed. |
| `os.environ["UNSLOTH_COMPILE_DISABLE"] = "1"` | Disables the auto compiler. Could be useful to debug incorrect finetune results. |
| `os.environ["UNSLOTH_DISABLE_FAST_GENERATION"] = "1"` | Disables fast generation for generic models. |
| `os.environ["UNSLOTH_ENABLE_LOGGING"] = "1"` | Enables auto compiler logging - useful to see which functions are compiled or not. |
| `os.environ["UNSLOTH_FORCE_FLOAT32"] = "1"` | On float16 machines, use float32 and not float16 mixed precision. Useful for Gemma 3. |
| `os.environ["UNSLOTH_STUDIO_DISABLED"] = "1"` | Disables extra features. |
| `os.environ["UNSLOTH_COMPILE_DEBUG"] = "1"` | Turns on extremely verbose `torch.compile` logs. |
| `os.environ["UNSLOTH_COMPILE_MAXIMUM"] = "0"` | Enables maximum `torch.compile` optimizations - not recommended. |
| `os.environ["UNSLOTH_COMPILE_IGNORE_ERRORS"] = "1"` | Can turn this off to enable fullgraph parsing. |
| `os.environ["UNSLOTH_FULLGRAPH"] = "0"` | Enables `torch.compile` fullgraph mode. |
| `os.environ["UNSLOTH_DISABLE_AUTO_UPDATES"] = "1"` | Forces no updates to unsloth-zoo. |
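As a minimal sketch of how the flags above are used: since Unsloth reads these environment variables at import time, they should typically be set before `import unsloth`. The particular combination of flags here is illustrative only.

```python
import os

# Set flags BEFORE importing unsloth - they are read at import time.
os.environ["UNSLOTH_ENABLE_LOGGING"] = "1"    # log which functions get compiled
os.environ["UNSLOTH_COMPILE_DISABLE"] = "1"   # disable the auto compiler while debugging

# Only import after the flags are in place:
# from unsloth import FastLanguageModel
```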

Another (though unlikely) possibility is that the model uploads we uploaded are corrupted. Try the following:

```python
from unsloth import FastVisionModel

# use_exact_model_name stops Unsloth from remapping the name to its own upload
model, tokenizer = FastVisionModel.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    use_exact_model_name = True,
)
```
