gemma-3-12b-it-orthogonal-reflection-bounded-ablation-v3-12B
ORBA (Orthogonal Reflection Bounded Ablation) was applied to several layers of this model, targeting both the mlp.down_proj.weight and self_attn.o_proj.weight streams, along with a few supporting techniques. Norms were preserved at the neuron level, which also guarantees numerical conservation of the Frobenius norm for each stream subjected to intervention.
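The full ORBA procedure is not published here, but the norm-preservation property can be illustrated with a minimal sketch: remove a direction from each weight row, then rescale every row back to its original L2 norm. Since the squared Frobenius norm is the sum of squared row norms, restoring each row norm restores the Frobenius norm exactly. The function name and the projection step are hypothetical, not the model's actual intervention code.

```python
import numpy as np

def ablate_direction_preserve_norms(W: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: remove direction v from each row of W, then
    rescale rows to their original L2 norms. Restoring per-row (neuron)
    norms also restores the Frobenius norm, since ||W||_F^2 is the sum
    of squared row norms. Not the published ORBA procedure."""
    v = v / np.linalg.norm(v)
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_abl = W - np.outer(W @ v, v)                # project out the direction
    new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
    return W_abl * (orig_norms / np.maximum(new_norms, 1e-12))

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
v = rng.standard_normal(16)
W2 = ablate_direction_preserve_norms(W, v)
```

Because the rescaling is a per-row scalar, the ablated rows stay orthogonal to the removed direction, so the direction is still neutralized after norm restoration.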
Some refusal behaviors have been geometrically ablated; refusal is a classic high-contrast case that has been well studied. Safety knowledge and awareness appear to be intact, so we posit that a refusal persona was ablated. The vision stack remains part of the model and was not subjected to intervention.
It turns out that Winsorization (magnitude clipping) at the 0.995 quantile was, somewhat by accident, effective at avoiding token-level glitching under the GeGLU activation function. Clipping at roughly 0.9996–0.9997 was sufficient to avoid overflow but still produced the aforementioned glitches during generation. Measurements made under 4-bit bitsandbytes quantization may also have contributed to error. Numerical stability is paramount under GeGLU.
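A minimal sketch of the clipping step described above: compute the chosen quantile of the absolute weight values and clip magnitudes to that cap. The function name is hypothetical; the 0.995 value is the one reported in this card.

```python
import numpy as np

def winsorize_weights(W: np.ndarray, q: float = 0.995) -> np.ndarray:
    """Clip weight magnitudes at the q-th quantile of |W| to tame the
    outliers that caused token-level glitches. q=0.995 is the value
    that worked here; ~0.9996-0.9997 avoided overflow but still glitched."""
    cap = np.quantile(np.abs(W), q)
    return np.clip(W, -cap, cap)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
Wc = winsorize_weights(W, q=0.995)
```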
In this model, we use standard subtractive techniques for directional contrast, after determining that the contrast direction was also aligned with a Householder geometric approach; we then used a Householder reflection to neutralize the intervention vector for ablation. Unfortunately, reflection has failure modes that this model architecture tolerates less well than the failure modes of the additive/subtractive realm: there are occasional token-level glitches and semantic drift, and repetition is not ruled out either. This model should be considered a negative result, perhaps less damaging than conventional abliteration, but not viable for production use. Further details of the intervention will be forthcoming.
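The geometric difference between the two approaches mentioned above can be sketched in a few lines, assuming a row-space convention for the weights (both function names are illustrative, not the release's actual code). Subtractive ablation projects the target direction to zero; a Householder reflection instead negates it, which is a larger, orthogonal (norm-preserving) change and plausibly why its failure modes are harsher.

```python
import numpy as np

def subtractive_ablation(W: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Projection: zero out the component of each row along v."""
    v = v / np.linalg.norm(v)
    return W - np.outer(W @ v, v)

def householder_ablation(W: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Householder: reflect across the hyperplane orthogonal to v,
    negating (rather than zeroing) each row's component along v.
    H is orthogonal, so row norms are preserved exactly."""
    v = v / np.linalg.norm(v)
    H = np.eye(len(v)) - 2.0 * np.outer(v, v)
    return W @ H

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
v = rng.standard_normal(16)
```

Applying either transform to the direction v shows the contrast directly: the projection sends it to zero, while the reflection flips its sign.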