gemma-3-27b-it-qat GGUF

Recommended way to run this model with llama.cpp's server:

```sh
llama-server -hf ggml-org/gemma-3-27b-it-qat-GGUF -c 0 -fa
```

Here `-hf` fetches the GGUF directly from this Hugging Face repository, `-c 0` takes the context size from the model's metadata, and `-fa` enables Flash Attention.

Then, access the web UI at http://localhost:8080
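Once the server is running, you can also talk to it over HTTP. A minimal sketch, assuming llama-server's OpenAI-compatible chat endpoint (the prompt text is just an illustration):

```sh
# Send a chat request to the server's OpenAI-compatible endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Explain QAT quantization in one paragraph."}
        ]
      }'
```

The response comes back as a standard JSON chat completion, so the same request works with any OpenAI-compatible client pointed at http://localhost:8080/v1.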

Format: GGUF
Model size: 27B params
Architecture: gemma3
Quantization: 4-bit (QAT)

