# Qwen3.5-35B-A3B GGUF

Recommended way to run this model:

```sh
llama-server -hf ggml-org/Qwen3.5-35B-A3B-GGUF
```

Then open http://localhost:8080 in your browser to use the built-in web UI.
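
The server also exposes an OpenAI-compatible HTTP API. As a minimal sketch, assuming the server is running with its defaults (port 8080, no `--api-key` set), you can send a chat completion request with `curl`:

```sh
# Send a chat completion request to the local llama-server instance.
# Since the server hosts a single model, no "model" field is required.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Hello! Briefly introduce yourself."}
        ],
        "temperature": 0.7
      }'
```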

## Model details

- Format: GGUF
- Model size: 35B params
- Architecture: qwen35moe
- Quantization: 8-bit