Download Qwen3 Models - MoE, Dense & Quantized Versions

Download the Qwen3 model series via Ollama, Hugging Face, or ModelScope, including MoE and Dense models and their quantized versions.

Download via Hugging Face

Download Qwen-3 model weights from the Hugging Face Hub, supporting the Transformers library and Git LFS.

Qwen3-235B-A22B
Flagship MoE model (235B total, 22B activated parameters)
Qwen3-30B-A3B
Efficient MoE model (30B total, 3B activated parameters)
Qwen3-32B
Large Dense model (128K context)
Qwen3-14B
Medium-Large Dense model (128K context)
Qwen3-8B
Medium Dense model (128K context)
Qwen3-4B
Small Dense model (32K context)
Qwen3-1.7B
Extra-Small Dense model (32K context)
Qwen3-0.6B
Extra-Small Dense model (32K context)
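The lineup above can be captured as a small lookup table for picking a model id programmatically. This is an illustrative sketch: the repo ids assume the Hub's `Qwen/<model-name>` convention, and the sizes and context windows are taken directly from the list.

```python
# Illustrative catalogue of the Qwen3 lineup listed above. Repo ids assume
# the Hub's "Qwen/<model-name>" convention; architecture, activated
# parameters, and context windows come from the list itself.
QWEN3_MODELS = {
    "Qwen/Qwen3-235B-A22B": {"arch": "MoE", "activated": "22B"},
    "Qwen/Qwen3-30B-A3B": {"arch": "MoE", "activated": "3B"},
    "Qwen/Qwen3-32B": {"arch": "Dense", "context": "128K"},
    "Qwen/Qwen3-14B": {"arch": "Dense", "context": "128K"},
    "Qwen/Qwen3-8B": {"arch": "Dense", "context": "128K"},
    "Qwen/Qwen3-4B": {"arch": "Dense", "context": "32K"},
    "Qwen/Qwen3-1.7B": {"arch": "Dense", "context": "32K"},
    "Qwen/Qwen3-0.6B": {"arch": "Dense", "context": "32K"},
}

def long_context_dense_models() -> list[str]:
    """Return the dense models that list a 128K context window."""
    return [repo_id for repo_id, info in QWEN3_MODELS.items()
            if info["arch"] == "Dense" and info.get("context") == "128K"]
```

For example, `long_context_dense_models()` returns `["Qwen/Qwen3-32B", "Qwen/Qwen3-14B", "Qwen/Qwen3-8B"]`, since Python dicts preserve insertion order.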
Qwen3 Model Collection
Visit the official Qwen collection page for links to all models.
  • All MoE and Dense models
  • Includes Base and Instruct (fine-tuned) versions
  • Provides GGUF, AWQ, and other quantized formats
  • Apache 2.0 License
Visit Qwen3 Collection
Installation & Usage

Using Transformers Library (Recommended)

Requires `transformers>=4.51.0`. Load a model with `AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-...")`.

from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
# "auto" picks the dtype and device placement for your hardware (device_map needs accelerate)
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", torch_dtype="auto", device_map="auto")

Using Git LFS

After installing Git LFS (`git lfs install`), use `git clone https://huggingface.co/Qwen/Qwen3-...` to download the full repository.

git clone https://huggingface.co/Qwen/Qwen3-8B
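For scripted downloads, the clone URL is simply the Hub base URL joined with the repo id. A minimal Python sketch; the `clone` helper is a hypothetical convenience that shells out to the same `git clone` command as above:

```python
import subprocess

HUB_BASE = "https://huggingface.co"

def clone_url(repo_id: str) -> str:
    """Build the Git clone URL for a Hub repo id such as Qwen/Qwen3-8B."""
    return f"{HUB_BASE}/{repo_id}"

def clone(repo_id: str) -> None:
    """Clone the full repository; requires git and Git LFS (`git lfs install`)."""
    subprocess.run(["git", "clone", clone_url(repo_id)], check=True)

print(clone_url("Qwen/Qwen3-8B"))  # https://huggingface.co/Qwen/Qwen3-8B
```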