gemma-3-12b-it-orthogonal-reflection-bounded-ablation-v4-12B

ORBA (Orthogonal Reflection Bounded Ablation) has been applied to several layers of this model, to both the mlp.down_proj.weight and self_attn.o_proj.weight streams, along with a few supporting techniques. In particular, we applied directional steering to ablate residual streams. Row-wise norm clamping also ensured numerical conservation of the Frobenius norm for each stream subjected to intervention.
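For concreteness, here is a minimal PyTorch sketch of this style of intervention (not the exact ORBA procedure): a precomputed unit refusal direction `r_hat` in the residual stream is projected out of the output space of an intervened weight matrix, and each row is then rescaled to its original norm, so per-row and hence Frobenius norms are conserved. The function name and the application snippet are illustrative.

```python
import torch

def ablate_direction(W: torch.Tensor, r_hat: torch.Tensor) -> torch.Tensor:
    """Project unit direction r_hat out of the output space of W
    (shape [out_features, in_features]), then rescale each row to its
    original norm (row-wise norm clamp, conserving the Frobenius norm)."""
    orig_norms = W.norm(dim=1, keepdim=True)            # per-row norms before intervention
    W_new = W - torch.outer(r_hat, r_hat @ W)           # (I - r r^T) W: remove the direction
    new_norms = W_new.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return W_new * (orig_norms / new_norms)             # restore original row norms

# Hypothetical application to the two intervened streams of one decoder layer:
# layer.mlp.down_proj.weight.data.copy_(ablate_direction(layer.mlp.down_proj.weight.data, r_hat))
# layer.self_attn.o_proj.weight.data.copy_(ablate_direction(layer.self_attn.o_proj.weight.data, r_hat))
```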

Select refusal behaviors have been geometrically ablated, refusal being a classic high-contrast case that has been well studied. Safety knowledge and awareness appear to be intact. We posit that a refusal persona was ablated. The vision stack remains part of the model but was not subjected to intervention.

It turns out that Winsorization (magnitude clipping) at 0.995 was, somewhat fortuitously, enough to avoid token-level glitching under the GeGLU activation function. Magnitude clipping at around 0.9997 (0.9996?) was sufficient to avoid overflow, but resulted in the aforementioned glitches during generation. Measurements made under 4-bit bitsandbytes quantization may also have contributed to error. Ensuring numerical stability is paramount under GeGLU.
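A hedged sketch of what such a Winsorization step might look like, assuming it is applied to an intervened weight tensor; the function name is hypothetical, and the torch.quantile tensor-size limit is noted rather than handled here.

```python
import torch

def winsorize_(t: torch.Tensor, q: float = 0.995) -> torch.Tensor:
    """Symmetric Winsorization: clamp magnitudes of t (in place) to the
    q-th quantile of |t|. Per the notes above, q = 0.995 avoided token-level
    glitching, while q near 0.9997 avoided overflow but still glitched."""
    # torch.quantile caps input size; very large tensors may need subsampling.
    cap = float(torch.quantile(t.abs().float().flatten(), q))
    return t.clamp_(min=-cap, max=cap)
```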

In this model, we used the standard subtractive technique for directional contrast after determining that it aligned with an analytic geometric approach, the Householder reflection; we then applied row-wise norm-clamped directional steering to neutralize the intervention vector during ablation. During both measurement and ablation, the projection was orthogonalized against the harmless direction to minimize interference with the intervention direction.
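For illustration, a sketch of the two geometric pieces just named, assuming directions live in the residual-stream (output) space of each weight matrix; function names are illustrative. The subtractive projection I - rr^T removes the component along r, while the Householder reflection H = I - 2rr^T negates it, which is the sense in which the two approaches align.

```python
import torch

def orthogonalize_direction(r: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
    """Remove the harmless-direction component h from candidate direction r,
    yielding a unit intervention direction orthogonal to the harmless one."""
    h_hat = h / h.norm()
    r_orth = r - (r @ h_hat) * h_hat
    return r_orth / r_orth.norm()

def householder_reflect(W: torch.Tensor, r_hat: torch.Tensor) -> torch.Tensor:
    """Apply the Householder reflection H = I - 2 r r^T to the output space
    of W, flipping (rather than merely removing) the component along r_hat."""
    return W - 2.0 * torch.outer(r_hat, r_hat @ W)
```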
