gemma-3-12b-it-biprojected-abliterated

This model was derived from google/gemma-3-12b-it.

Projected abliteration was applied to determine the refusal direction, followed by a second round that removed the refusal direction's projected contribution along the harmless direction in the layers targeted for intervention, which should further reduce model damage. No subsequent fine-tuning was applied to repair damage. The net result is a model that refuses far less often than the original model, yet still retains awareness of safety and harms.
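The two-step projection described above can be sketched roughly as follows. This is a minimal illustration, not the exact procedure used for this model: the function name, the assumption that the refusal and harmless directions are given as residual-stream vectors, and the choice to ablate a single weight matrix are all hypothetical simplifications.

```python
import numpy as np

def unit(v):
    """Return v scaled to unit norm."""
    return v / np.linalg.norm(v)

def biprojected_ablate(W, refusal_dir, harmless_dir):
    """Hypothetical sketch of bi-projected abliteration on one weight
    matrix W that writes into the residual stream.

    Step 1: orthogonalize the refusal direction against the harmless
    direction, so that ablation leaves the harmless direction untouched.
    Step 2: remove the component of W's output along that orthogonalized
    refusal direction (standard directional ablation).
    Assumes refusal_dir is not parallel to harmless_dir.
    """
    r = unit(refusal_dir)
    h = unit(harmless_dir)
    # Second-round projection: keep only the part of r orthogonal to h.
    r_perp = unit(r - np.dot(r, h) * h)
    # Directional ablation: zero out writes along r_perp.
    return W - np.outer(r_perp, r_perp) @ W
```

After this transformation, the matrix no longer writes anything along the orthogonalized refusal direction, while its output along the harmless direction is unchanged, which is the intended damage-reduction property.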

More details to follow.
