ORBA: Orthogonal Reflection Bounded Ablation — A Geometrically Exact Detour in Directional Activation Editing
Independent researcher
March 24, 2026
A geometric affirmation and a cautionary tale regarding analytical correctness.
Abstract
Building on our prior work establishing Magnitude-Preserving Orthogonal Ablation (MPOA) (Lai, October–November 2025) as a norm-preserving improvement over standard abliteration, we investigate whether a geometrically exact rotation-based approach can further improve on directional weight-space interventions. We restrict our exploration to rank-1 direction estimation from activation streams and rank-1 weight-space interventions, though in practice interventions are applied across multiple layers to address the redundant pathways identified as the Hydra effect.
Along the way, we derive an analytical justification for current difference-of-means contrast vector practice: unit-normalized difference-of-means is exactly the normal of the Householder reflector mapping the harmless to the forbidden direction, providing geometric grounding for a previously heuristically motivated operation. We further show that subtracting prior to normalization is preferable to normalizing prior to subtraction, both geometrically and numerically. Householder reflection, while isometric and analytically exact, introduces token and semantic drift that directional ablation does not — reflection amplifies angular error in a way that projection does not, making misdirected sign-flips the characteristic failure mode rather than incomplete zeroing. Directional ablation, derived here as the rank-1 weight-space primitive unifying abliteration, activation steering, and PEFT, achieves capability preservation comparable to MPOA with greater semantic stability. Models reflecting both approaches have been released for comparison.
Quick Links
- 📄 Prior work: Projected Abliteration (October 2025)
- 📄 Prior work: Norm-Preserving Biprojected Abliteration (November 2025)
- 🤗 Model (directional ablation): grimjim/gemma-3-12b-it-orthogonal-reflection-bounded-ablation-v4-12B
- 🤗 Model (Householder, for comparison): grimjim/gemma-3-12b-it-orthogonal-reflection-bounded-ablation-v3-12B
- 💻 GitHub repo
Exacting Problems With Subtraction and Orthogonalized Projection
Refusal vector ablation, later known as abliteration, was introduced by Arditi et al. (April 2024). The process works by finding a refusal direction in a model's residual stream and projecting it out of the weight matrices. The refusal direction is computed as a difference-of-means between sets of harmful — or forbidden — and harmless prompts, taking the mean activations of each, then subtracting one from the other.
Young (December 2025) conducted a systematic comparative evaluation of abliteration methods and presented quantitative evidence of commonplace collateral damage: depending on implementation, GSM8K mathematical reasoning scores shifted by anywhere from +1.51 to -18.81 percentage points.
We argue that collateral damage has been an artifact of the methods applied, and that the historical approach carries geometric assumptions that may have been insufficiently questioned in this space.
With regard to contrast vectors, it turns out that difference-of-means with subsequent normalization is doubly justifiable as the proper geometric operation to perform. Once unit-normalized, the difference between the forbidden and harmless mean directions is exactly the normal of the Householder reflector that maps the harmless direction onto the forbidden one. In theory this also resolves a subtle issue with the raw difference of means: subtracting unnormalized vectors conflates directional difference with magnitude difference, encoding both in the intervention vector and potentially contributing to norm growth or collapse in the modified weights; in practice, floating point representation already imposes a strong separation between magnitude and direction via exponent and mantissa. Gupta et al. (February 2025) identified norm growth as a root cause of model degradation in knowledge editing, addressed there with explicit Frobenius norm regularization; unit-normalizing before subtraction sidesteps the issue geometrically, though in practice this approach privileges directional precision over magnitude.

The correspondence points to a deep unity between the two framings: the statistical and the isometric are not merely analogous but exactly coincident under unit normalization. This provides a geometric basis for the measurement of contrast vectors in abliteration: the direction is not merely a statistical heuristic but an exact geometric operation. Whether this geometric cleanliness carries through to the behavioral effects of ablation depends additionally on the degree to which behavioral distinctions are linearly encoded and separable in activation space.

We found that the current practice of performing subtraction for the difference of means prior to normalization is superior to taking the difference of normalized means, both geometrically, as it preserves magnitude structure that encodes meaningful contrast information, and numerically, as it avoids cancellation error when cosine similarity is high.
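The claimed correspondence can be checked numerically. Below is a minimal NumPy sketch, not the released pipeline; the vector names are illustrative, and we assume unit-normalized mean directions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Illustrative stand-ins for unit-normalized forbidden/harmless mean directions.
forbidden = rng.normal(size=d)
forbidden /= np.linalg.norm(forbidden)
harmless = rng.normal(size=d)
harmless /= np.linalg.norm(harmless)

# Unit-normalized difference of means: the Householder normal.
u = (forbidden - harmless) / np.linalg.norm(forbidden - harmless)
H = np.eye(d) - 2.0 * np.outer(u, u)  # Householder reflector

# The reflector maps the harmless direction exactly onto the forbidden one,
# and, being an involution, vice versa.
assert np.allclose(H @ harmless, forbidden)
assert np.allclose(H @ forbidden, harmless)
```

The exactness here is analytic, not approximate: for unit vectors $a$ and $b$, the reflector with normal $(a-b)/\lVert a-b \rVert$ satisfies $Hb = a$ identically.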
In abliteration, the orthogonalized refusal vector is used for orthogonal projection-based ablation on the residual streams. This step is not isometric and is therefore irreversible. Abliteration removes the refusal component from $\mathbf{x}$ via:

$$\mathbf{x}' = \mathbf{x} - (\hat{r}^\top \mathbf{x})\,\hat{r},$$

where $\hat{r}$ is the unit refusal direction.
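A quick numerical sketch (illustrative names) confirms both properties: the refusal component is zeroed, and the norm shrinks, so the operation cannot be undone:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32
r = rng.normal(size=d)
r /= np.linalg.norm(r)        # unit refusal direction
x = rng.normal(size=d)        # residual-stream activation

x_abl = x - (r @ x) * r       # project out the refusal component

assert abs(r @ x_abl) < 1e-12                        # refusal component zeroed
assert np.linalg.norm(x_abl) < np.linalg.norm(x)     # norm shrinks: not isometric
```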
We reframe the question of "what is the difference between forbidden and harmless directions?" to "what transformation maps harmless to forbidden?" This leads naturally to the orthogonal Procrustes problem, which has already been solved. For the case of 1D matrices, Householder reflection provides an exact analytical solution for the task of transforming one unit vector to another (via full reflection) as well as for orthogonalized geometric ablation (via total nullification).
Householder As Exact Analytical Geometric Tool
If we want the transformation that maps $\mu_{\text{harmless}}$ to $\mu_{\text{forbidden}}$, and we further constrain that transformation to be a reflection — the simplest isometric transformation — one naturally arrives at the Householder reflector. We also previously established that difference-of-means used to derive the contrast vector aligns with Householder reflection.
In the case of activation steering, or even ablation or induction, a key computational insight is that in the 1D case (when the refusal direction is a single vector rather than a subspace) one can avoid materializing $H$ explicitly. The action of $H$ on any vector $\mathbf{w}$ reduces to a rank-1 update:

$$H\mathbf{w} = \mathbf{w} - 2\,(\mathbf{u}^\top \mathbf{w})\,\mathbf{u},$$

which requires only a dot product and a scaled vector addition. The solution is analytically exact, not approximate, and has negligible computational overhead even at the scale of a large language model's weight matrices.
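A sketch of the implicit application, checked against the explicit matrix (illustrative code, not the released pipeline):

```python
import numpy as np

def householder_apply(w, u):
    """Apply H = I - 2 u u^T to w as a rank-1 update: one dot product plus one axpy."""
    return w - 2.0 * (u @ w) * u

rng = np.random.default_rng(2)
d = 256
u = rng.normal(size=d)
u /= np.linalg.norm(u)
w = rng.normal(size=d)

H = np.eye(d) - 2.0 * np.outer(u, u)   # explicit matrix, built here for verification only

assert np.allclose(householder_apply(w, u), H @ w)   # same action, no d-by-d matrix needed
assert np.isclose(np.linalg.norm(householder_apply(w, u)),
                  np.linalg.norm(w))                  # isometric: norms preserved
```

The rank-1 form is what makes the operation cheap at scale: applied row-wise to a weight matrix, it costs one matrix-vector product and one outer-product update.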
For directional suppression, or ablation, assigning the target direction to be the antipode of the source direction is precisely this Householder update — corresponding to scale_factor=1.0 in the geodesic interpolation.
Pham et al. (September 2024), approaching from a direction-magnitude perspective, framed the operation as a pseudo-rotation approximating the full Householder matrix. The present work arrives at the same geometric object from a numerical precision analysis, identifying the 1D special case as an exact reflection rather than an approximation — a distinction that only becomes visible when reasoning thoroughly about what the operation is doing to the weights.
1. Householder reflection
For a unit vector $\mathbf{u}$, the Householder reflector is:

$$H = I - 2\,\mathbf{u}\mathbf{u}^\top.$$
Its action on any vector $\mathbf{w}$ decomposes cleanly: the component along $\mathbf{u}$ is flipped, while the orthogonal complement is unchanged:

$$H\mathbf{w} = -(\mathbf{u}^\top\mathbf{w})\,\mathbf{u} + \left(\mathbf{w} - (\mathbf{u}^\top\mathbf{w})\,\mathbf{u}\right).$$
2. Antipodal cancellation
We apply Householder to directional ablation. The geodesic rotation from $\mathbf{u}$ toward $-\mathbf{u}$, parametrized by $\lambda$:

$$R(\lambda)\,\mathbf{w} = \mathbf{w} + (\cos\theta_\lambda - 1)\,(\mathbf{u}^\top\mathbf{w})\,\mathbf{u},$$

where $\theta_\lambda = \lambda\pi/2$. At $\lambda = 1$ the refusal component of $\mathbf{w}$ is rotated exactly to its orthogonal complement, zeroed without reflection; at $\lambda = 2$ the full Householder reflection is recovered.
Each operation is standard; the contribution is in their composition and the antipodal framing that makes $\lambda = 1$ the natural default for refusal suppression.
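The composition can be sketched numerically. This assumes the parametrization $\theta_\lambda = \lambda\pi/2$, under which $\lambda = 0$ is the null operation, $\lambda = 1$ zeroes the refusal component, and $\lambda = 2$ recovers the full reflection (illustrative code):

```python
import numpy as np

def geodesic_ablate(w, u, lam):
    """Interpolated update w + (cos(lam*pi/2) - 1) (u.w) u.

    lam=0: identity; lam=1: zeroes the u-component; lam=2: full Householder reflection."""
    c = np.cos(lam * np.pi / 2.0) - 1.0
    return w + c * (u @ w) * u

rng = np.random.default_rng(3)
d = 64
u = rng.normal(size=d)
u /= np.linalg.norm(u)
w = rng.normal(size=d)

assert np.allclose(geodesic_ablate(w, u, 0.0), w)                    # null operation
assert abs(u @ geodesic_ablate(w, u, 1.0)) < 1e-12                   # refusal component zeroed
assert np.allclose(geodesic_ablate(w, u, 2.0), w - 2 * (u @ w) * u)  # antipodal reflection
```

Note that only the two endpoints $\lambda = 0$ and $\lambda = 2$ are orthogonal transformations; intermediate values, including the $\lambda = 1$ default, are not isometric.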
A numerical consideration: unlike subtraction or addition, Householder computation involves floating point multiplication, which can accumulate errors if $\mathbf{u}$ is not perfectly normalized. The remedy is the "twice is enough" principle (Horning et al., 2020) from numerical linear algebra — applying the correction twice provably suffices for stability, at negligible additional cost.
Modified Gram-Schmidt orthogonalizes $\mathbf{u}$ against $\hat{h}$, ensuring the intervention does not perturb the harmless direction:

$$\mathbf{u}' = \mathbf{u} - (\hat{h}^\top \mathbf{u})\,\hat{h}.$$
Applying "twice is enough" for numerical stability, then renormalizing, we obtain:

$$\mathbf{u}'' = \mathbf{u}' - (\hat{h}^\top \mathbf{u}')\,\hat{h}, \qquad \hat{u} = \frac{\mathbf{u}''}{\lVert \mathbf{u}'' \rVert}.$$
$\hat{u}$ replaces $\mathbf{u}$ as the intervention direction, with $\hat{h}$ serving as the geometric boundary condition for capability preservation.
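The orthogonalization step under the "twice is enough" discipline can be sketched as follows ($\hat{h}$ is assumed unit-normalized; names are illustrative):

```python
import numpy as np

def orthogonalize_twice(u, h_hat):
    """Modified Gram-Schmidt of u against unit h_hat, applied twice
    ("twice is enough"), then renormalized."""
    for _ in range(2):
        u = u - (h_hat @ u) * h_hat
    return u / np.linalg.norm(u)

rng = np.random.default_rng(4)
d = 128
h_hat = rng.normal(size=d)
h_hat /= np.linalg.norm(h_hat)
u = rng.normal(size=d)
u /= np.linalg.norm(u)

u_hat = orthogonalize_twice(u, h_hat)

assert abs(h_hat @ u_hat) < 1e-14             # orthogonal to the harmless direction
assert np.isclose(np.linalg.norm(u_hat), 1.0)  # unit-normalized intervention direction
```

A single pass already leaves a small residual along $\hat{h}$ due to rounding; the second pass drives it to the order of machine epsilon.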
Householder operations have the advantage of being isometric, allowing both recovery and compounding while minimizing error. Applying Gram-Schmidt to the refusal direction prior to ablation is correct for preserving capabilities, although it breaks isometry.
Deriving Directional Ablation
Since Householder reflection introduced precision errors that outweighed its geometric advantages, further survey work identified a formulation for directional steering that is analytically exact and treats Householder reflection as a boundary condition. The resulting closed-form solution avoids the trigonometrically induced precision error that destabilized our Householder-reflection-ablated model.
Directional steering, or rank-1 weight editing, gives us an alternative that applies both to activation steering and to directional ablation of residual streams. Sun et al. (March 2026) show how a rank-1 LoRA weight-space edit can convert a steering direction obtained from activation steering into weights; although we did not pursue weight editing, we observed that this correspondence unites behavior editing with PEFT. We also briefly show the link between knowledge editing and behavior editing via ROME/MEMIT.
The rank-1 update is the unifying primitive across all methods in our scope:

$$W' = W + b\,a^\top,$$
where $b \in \mathbb{R}^{d_{out}}$ is the output direction and $a \in \mathbb{R}^{d_{in}}$ is the input direction that triggers it.
Knowledge editing (ROME, MEMIT) treats both $b$ and $a$ as free variables, solving for $b$ via least-squares given a target key vector $a$ and desired output $v_*$:

$$b = \frac{v_* - W a}{a^\top a}, \qquad \text{so that } (W + b\,a^\top)\,a = v_*.$$
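A minimal numerical instance of the closed-form rank-1 solve (dimensions and names are illustrative; full ROME additionally weights the solve by key covariance statistics, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(5)
d_out, d_in = 8, 6
W = rng.normal(size=(d_out, d_in))
a = rng.normal(size=d_in)        # target key (input) vector
v_star = rng.normal(size=d_out)  # desired value (output) vector

# Minimal-norm b so that the edited matrix maps the key a exactly to v_star.
b = (v_star - W @ a) / (a @ a)
W_edit = W + np.outer(b, a)

assert np.allclose(W_edit @ a, v_star)   # the key now retrieves the desired value
```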
Directional steering (Steer2Edit) fixes $a = s$ and derives $b$ in closed form so that the model's response along the unit direction $s$ is scaled by $\alpha$:

$$b = (\alpha - 1)\,W s, \qquad W' = W\left(I + (\alpha - 1)\,s s^\top\right).$$
Abliteration is the special case $\alpha = 0$, projecting out the direction $s$ entirely:

$$W' = W\left(I - s s^\top\right).$$
Rank-1 LoRA recovers the same structure when $r = 1$, with $a = s$ as the learned input adapter and $b$ as the corresponding output direction:

$$W' = W + B A, \qquad B \in \mathbb{R}^{d_{out} \times 1},\; A \in \mathbb{R}^{1 \times d_{in}}.$$
Applying the rank-1 update to the steering direction $s$ yields:

$$W' s = \alpha\, W s.$$
This is geometrically equivalent to rotating so that $s$ aligns with a coordinate axis, scaling that axis by $\alpha$, then rotating back. The conjugation collapses algebraically into the rank-1 form above — no explicit rotation matrix is needed, and no trigonometric functions are evaluated, avoiding the associated floating-point precision costs.
By picking $\alpha = 0$, we can ablate a direction entirely from the residual stream. To align with ablation convention, an implementation should map $\alpha$ to $1 - \text{ablation\_scale}$. This was the basis of the implementation of directional ablation in our codebase, and is recommended over our Householder-based ablation procedure.
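A sketch of directional ablation with the suggested `ablation_scale` convention (illustrative implementation, not the released code; `s` is assumed unit-normalized):

```python
import numpy as np

def directional_ablate(W, s, ablation_scale=1.0):
    """Rank-1 edit scaling the response along unit direction s by
    alpha = 1 - ablation_scale; ablation_scale=1.0 removes the direction entirely."""
    alpha = 1.0 - ablation_scale
    return W + (alpha - 1.0) * np.outer(W @ s, s)

rng = np.random.default_rng(6)
d = 32
W = rng.normal(size=(d, d))
s = rng.normal(size=d)
s /= np.linalg.norm(s)

W_abl = directional_ablate(W, s, ablation_scale=1.0)

t = rng.normal(size=d)
t -= (s @ t) * s                        # an input orthogonal to s

assert np.allclose(W_abl @ s, 0.0)      # the direction is fully ablated
assert np.allclose(W_abl @ t, W @ t)    # orthogonal inputs pass through unchanged
```

No trigonometric functions appear anywhere in the update, which is the point: the conjugated rotate-scale-rotate picture collapses to pure dot products and outer products.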
This framing also makes it evident that Gram-Schmidt orthogonalization of the intervention direction against the harmless direction amounts to a sparsification of the direction, which would tend to reduce the impact of the intervention. This aligns with prior findings.
An Orthogonal Reflection Bounded Ablation Pipeline
Below, we take the logic of the 1D boundary condition to its geometric and semantic conclusions.
We present ORBA — Orthogonal Reflection Bounded Ablation.
The principles implemented in the ORBA method:
- Direction is derived from the Householder construction on normalized mean vectors
- Norm is retained from the mean difference, as a measure of the magnitude of the shift in activation space
- Intervention applies Householder reflection or directional steering to the weight matrices rather than an additive projection
- Numerical stability is maintained at key steps to minimize the propagation of numerical error
- Harmless capability preservation is ensured by orthogonalizing the intervention direction against the harmless direction as a boundary condition
The Householder intervention is an isometric transformation. Norms are preserved by construction — the Frobenius norm constraint that Gupta et al. had to impose as explicit regularization falls out of the geometry for free. The transformation is also invertible: because Householder reflection is its own inverse ($H^2 = I$), the edit can be undone exactly, which additive projection cannot offer.
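The involution property can be demonstrated directly on a weight matrix (sketch; the reflector is applied on the input side via right-multiplication, realized as a rank-1 update):

```python
import numpy as np

def reflect_input_side(W, u):
    """Right-multiply W by H = I - 2 u u^T, realized as a rank-1 update."""
    return W - 2.0 * np.outer(W @ u, u)

rng = np.random.default_rng(7)
d = 64
u = rng.normal(size=d)
u /= np.linalg.norm(u)
W = rng.normal(size=(d, d))

W_edit = reflect_input_side(W, u)
W_back = reflect_input_side(W_edit, u)   # H^2 = I: the edit is exactly undone

assert np.allclose(W_back, W)                                  # invertible in closed form
assert np.isclose(np.linalg.norm(W_edit), np.linalg.norm(W))   # Frobenius norm preserved
```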
The directional steering intervention is only isometric at the extremes: the null operation and the Householder intervention boundary. We correct for this via norm clamping, analogous to our prior work on norm preservation, and guided by the literature validating preservation of magnitudes.
In both cases, row-wise norm clamping attempts to maintain the Frobenius norm via composition.
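One plausible reading of row-wise norm clamping, as a sketch (assumed semantics: no edited row may exceed its original norm, which in turn bounds the Frobenius norm of the whole matrix):

```python
import numpy as np

def clamp_row_norms(W_edit, W_orig):
    """Rescale each edited row so its L2 norm does not exceed the original row's norm."""
    orig = np.linalg.norm(W_orig, axis=1, keepdims=True)
    new = np.linalg.norm(W_edit, axis=1, keepdims=True)
    scale = np.minimum(1.0, orig / np.maximum(new, 1e-30))  # shrink only, never grow
    return W_edit * scale

rng = np.random.default_rng(8)
W = rng.normal(size=(16, 8))
W_grown = 2.0 * W                 # simulate norm growth introduced by an edit

W_clamped = clamp_row_norms(W_grown, W)

assert np.allclose(np.linalg.norm(W_clamped, axis=1), np.linalg.norm(W, axis=1))
assert np.linalg.norm(W_clamped) <= np.linalg.norm(W) + 1e-12
```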
To preserve behavior on harmless instructions, modified Gram-Schmidt orthogonalization can be applied to ensure the intervention direction is orthogonal to $\mu_{\text{harmless}}$ — the geometric encoding of "do not perturb harmless inputs." This is a semantically motivated constraint, but it introduces a tension: the orthogonalization modifies the pure Householder direction, partially breaking the isometric story.
We acknowledge this compromise, a deliberate decision to preserve model capability over ablation efficacy. For maximal reversibility under ORBA, the Gram-Schmidt orthogonalization should be deferred until the last possible moment. One could dispense with said orthogonalization, but there is some risk to the numerical validity of Householder if the two directions are nearly orthogonal to begin with — unlikely in well-trained models (Wang et al., 2025) — and concrete risk of ablation of entangled capability. For some applications, this alternative tradeoff may be acceptable. Fang et al.'s AlphaEdit (October 2024) null space constraint is a more principled alternative to apply to the intervention direction as a final boundary condition, and could conceivably eliminate the need for Gram-Schmidt orthogonalization.
To enhance overall numerical stability, we adopted the "twice is enough" guidance of Horning et al., which is provably sufficient.
Despite each operation being analytically exact, Householder reflection in practice introduces semantic drift that directional steering avoids — floating point trigonometric error amplifies angular imprecision into misdirected sign-flips in a way that projection-based methods do not encounter.
Norm preservation via negation of the projected intervention direction component is architecturally more disruptive than norm reduction via zeroing, likely because downstream normalization layers provide implicit robustness to magnitude changes but not to sign-flipped directional components. The target model has likely never encountered negated projections during training; it has effectively encountered magnitude-reduced activations via dropout, weight decay, and normalization dynamics. Our experience, along with the report of collapse of AlphaEdit in the absence of norm preservation (Gupta, February 2025), suggests that models that have rapidly growing norms as inference progresses through layers will be more sensitive to disruption. The Householder reflection approach was found to be more disruptive than the directional steering approach.
For single-vector weight-space rank-1 interventions, the practical ceiling appears to be set by intervention direction quality rather than intervention mechanics. Once the intervention is norm-preserving and directionally clean, further geometric sophistication yields diminishing returns or regression. From an engineering standpoint, this has been a negative result with regard to Householder reflection in practice, and a validation of directional ablation.
Nonetheless, our prior norm-preserving ablation technique (Lai, October–November 2025), which we now refer to as Magnitude-Preserving Orthogonal Ablation, also remains a principled and reproducible operating point that balances capability preservation against semantic coherence better than either pure ablation or Householder reflection. We have since gone back and applied the improvements to numerical stability identified for Gram-Schmidt projection; the associated GitHub repo will be updated to support optional experimentation with Householder reflection ablation, enabling replication of token-level and semantic-level disruptions.
We intend to follow up with some benchmark results in the comments.
Methodology
We applied the pipeline to Gemma 3 12B Instruct to enable direct comparison against comparable efforts. Specifically, due to hardware constraints, we performed activation measurements on a 4-bit bitsandbytes quantized model and then ablated the full weights. In general, intermediate calculations were performed in double-precision floating point to minimize precision loss. To run inference under bitsandbytes without overflow, we additionally had to restore the model's embed_tokens and norm layers to full weights after loading. This model also belongs to the category of models with tied weights; we made sure to call the relevant tie_weights() method under transformers.
The preservation of contrast directions under quantization was an empirical observation for years without apparent theoretical grounding. Nikolaou et al. (October 2025) have since proven that transformer language models are injective and that quantization preserves — and can increase — minimum pairwise representation distances. This provides a post-hoc geometric explanation for the validity of measuring contrastive vectors on quantized models: quantization is constrained to precision reduction and cannot systematically alter directional structure without simultaneously destroying model capability. Contrast directions, being a property of that directional structure, are therefore expected to survive quantization. It should be noted empirically that quantization has generally not been observed to function as a form of jailbreak.
Because of instability under GeGLU, we had to Winsorize (apply magnitude capping to) the activations when accumulating means, via streaming Welford in double precision, to avoid overflow. We first attempted to cap just under the overflow threshold, at roughly 0.9996-0.9997, but the resulting model demonstrated token-level glitching. Reverting to the prior ad hoc value of 0.995 restored stability on par with prior efforts. A slightly lower value might have been better, but keeping the same value allowed a cleaner comparison against prior efforts, with one less confounding hyperparameter. The necessity of Winsorization, and a suitable value for its hyperparameter, remains model-dependent.
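A simplified sketch of the accumulation (assumed semantics: a fixed magnitude cap stands in for the quantile-style threshold discussed above; the streaming update tracks the mean in float64):

```python
import numpy as np

class WinsorizedWelford:
    """Streaming (Welford-style) mean in double precision, with per-element
    magnitude capping applied before accumulation to avoid overflow."""

    def __init__(self, dim, cap):
        self.n = 0
        self.mean = np.zeros(dim, dtype=np.float64)
        self.cap = float(cap)  # model-dependent; a stand-in for the text's 0.995 setting

    def update(self, x):
        # Winsorize: clamp magnitudes before folding into the running mean.
        x = np.clip(np.asarray(x, dtype=np.float64), -self.cap, self.cap)
        self.n += 1
        self.mean += (x - self.mean) / self.n

acc = WinsorizedWelford(dim=4, cap=10.0)
acc.update([1.0, 2.0, 3.0, 4.0])
acc.update([1.0, 2.0, 1e9, 4.0])   # the outlier is capped to 10.0 before accumulation

assert np.allclose(acc.mean, [1.0, 2.0, 6.5, 4.0])
```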
We ablated multiple layers. Precedent for intervention on multiple layers, as opposed to a single layer, was reported by Wang et al. (May 2025). For intervention, we took measurements from layer 27, which sits at roughly 57% of the model's depth and showed adequate separation. We intervened on layers 4 through 45, essentially leaving the outermost 4 layers untouched. This was motivated by an apparent diffuseness of refusal across layers in this particular model. For maximum capability preservation, we orthogonalized the refusal direction against the harmless direction upon measurement, and again when applying the measurement to a different layer.
The resulting models retain a deep understanding of safety, which can be confirmed by direct interrogation; occasional token-level glitches and semantic drift can be confirmed the same way. Subjectively, the failure mode is jarring precisely because the model is mostly coherent: errors are intermittent rather than pervasive, which makes them more disorienting than those of a model that fails consistently. Glitches that appear as simple typographical errors or synonym substitutions may be able to pass benchmarking, suggesting a potential silent failure mode in automated testing. We include, in the appendix, an example of a creative generative task with a token-level glitch that appears as a typographical error.
Many Roads, One Destination
In this work, we have taken lessons from adjacent domains and reimported them back to the founding problem posed by contrastive activation addition.
Zou et al.'s representation engineering (October 2023) established that behavioral properties are encoded as directions in activation space — the theoretical ancestor of everything that follows. Panickssery et al. (December 2023) provided the mechanistic foundation that abliteration would later inherit, demonstrating contrastive activation addition as a principled operation while noting it "minimally reduces capabilities" — a residual cost that geometric imprecision leaves on the table. Liu et al.'s DoRA (February 2024) showed that effective fine-tuning edits are predominantly directional at low rank, with norms conserved — mechanistic plausibility for norm preservation during editing arriving from the domain of low-rank parameter fine-tuning. DoRA outperforming LoRA at very low rank alone is suggestive evidence for our directional approach.
In the knowledge editing literature, Gupta et al. (February 2025) explicitly identified norm growth as a root cause of degradation and prescribed Frobenius norm constraint as the cure. Xu et al.'s MOSE (January 2026) demonstrated that multiplicative orthogonal sequential editing outperforms additive approaches — norm-preserving orthogonal updates arriving as the conclusion from yet another subfield. Fang et al.'s AlphaEdit (October 2024) established null space constraints as a geometrically compatible editing primitive.
On the interpretability side, Wollschläger et al. (February 2025) examined the geometry of refusal directly, finding it better described as a concept cone — a low-dimensional manifold rather than a single direction — with representational independence between refusal and harmlessness. Piras et al. (November 2025) provided empirical confirmation, proving that Self-Organizing Maps generalize the difference-in-means technique and demonstrating that ablating multiple directions outperforms single-direction baselines. Anthropic's persona vectors work (Chen et al., July 2025) showed that behavioral properties like sycophancy, hallucination, and, implicitly, refusal are encoded as steerable directions, and that naive steering carries capability costs that norm-aware approaches avoid.
Finally, You et al.'s Spherical Steering (February 2026) arrived at the same geometric destination from the inference-time steering direction: rotation along a geodesic outperforms activation addition by +10% on TruthfulQA, COPA, and Storycloze, with norm-preserving rotation stated explicitly as "a robust and effective primitive for precise inference-time control."
From weight editing, activation editing, inference-time steering, knowledge editing, and interpretability: all roads have pointed to norm-preserving rotation as a principled editing mechanism, though empirical results are the ultimate proof of practical correctness or lack thereof.
An Abrupt Limit — and What Might Come Next
The disappointing performance of Householder reflection points to a hard ceiling of what low-cost exact computation can achieve when refusal is represented as a single direction, in combination with the constraints of finite precision in floating point representations. Reflection amplifies directional error in a way that projection does not, because a small angular error in the reflection axis produces a correspondingly displaced sign-flip rather than merely a slightly misaligned zeroing. Projection's failure mode is incomplete or entangled ablation; reflection's failure mode is misdirected negation. Directional ablation mitigates this by avoiding trigonometric functions.
Beyond this ceiling lie richer geometric approaches. Wollschläger et al.'s concept cones (February 2025) and Piras et al.'s multi-directional measurements (November 2025) both point toward frameworks for which the 1D boundary is but a principled edge case. Multi-directional Householder — applied to each direction in the concept cone — is the natural extension, as is a synthesis with Piras et al.'s SOM-based extraction. Null space techniques inspired by AlphaEdit (Fang et al., October 2024) could yet prove their worth for capability preservation. Jiang et al. (March 2026) look at activation steering from a cross-layer approach. Nanfack et al. (March 2026) instead take a distributional approach using optimal transport. Gülmez (December 2025) explored adaptive multi-directional projections with regularized layer selection. Sun et al. (March 2026) demonstrate how to convert a rank-1 edit based on steering vectors into a rank-1 LoRA weight edit; this step connects behavior editing to PEFT.
On a minor note, perhaps improved numerical accuracy could operationally benefit contrast vectors used in persona editing (Chen et al., July 2025).
References
- Arditi et al., "Refusal in LLMs is Mediated by a Single Direction," LessWrong, April 2024 / arXiv:2406.11717 [cs.LG], June 2024
- Chen et al., "Persona Vectors: Monitoring and Controlling Character Traits in Language Models," arXiv:2507.21509 [cs.CL], July 2025 / Anthropic blog post, August 2025
- Fang et al., "AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models," arXiv:2410.02355 [cs.CL], October 2024
- Gülmez, "Gabliteration: Adaptive Multi-Directional Neural Weight Modification for Selective Behavioral Alteration in Large Language Models," arXiv:2512.18901 [cs.AI], December 2025
- Gupta et al., "Lifelong Knowledge Editing requires Better Regularization," arXiv:2502.01636 [cs.CL], February 2025
- Horning et al., "Twice is enough for dangerous eigenvalues," arXiv:2010.09710 [math.NA], October 2020
- Jiang et al., "Global Evolutionary Steering: Refining Activation Steering Control via Cross-Layer Consistency," arXiv:2603.12298 [cs.LG], March 2026
- Lai, "Projected Abliteration," Huggingface blog post, October 2025
- Lai, "Norm-Preserving Biprojected Abliteration," Huggingface blog post, November 2025
- Liu et al., "DoRA: Weight-Decomposed Low-Rank Adaptation," arXiv:2402.09353 [cs.CL], February 2024
- Nanfack et al., "Efficient Refusal Ablation in LLM through Optimal Transport," arXiv:2603.04355 [cs.LG], March 2026
- Nikolaou et al., "Language Models are Injective and Hence Invertible," arXiv:2510.15511 [cs.LG], October 2025
- Panickssery et al., "Steering Llama 2 via Contrastive Activation Addition," arXiv:2312.06681 [cs.CL], December 2023
- Pham et al., "Householder Pseudo-Rotation: A Novel Approach to Activation Editing in LLMs with Direction-Magnitude Perspective," arXiv:2409.10053 [cs.CL], September 2024
- Piras et al., "SOM Directions are Better than One: Multi-Directional Refusal Suppression in Language Models," arXiv:2511.08379 [cs.AI], November 2025
- Sun et al., "Steer2Edit: From Activation Steering to Component-Level Editing," arXiv:2602.09870 [cs.CL], March 2026
- Wang et al., "Refusal Direction is Universal Across Safety-Aligned Languages," arXiv:2505.17306 [cs.CL], May 2025
- Wang et al., "Surgical, Cheap, and Flexible: Mitigating False Refusal in Language Models via Single Vector Ablation," arXiv:2410.03415 [cs.CL], October 2024
- Wollschläger et al., "The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence," arXiv:2502.17420 [cs.LG], February 2025
- Xu et al., "Multiplicative Orthogonal Sequential Editing for Language Models," arXiv:2601.07873 [cs.LG], January 2026
- Young, "Comparative Analysis of LLM Abliteration Methods: A Cross-Architecture Evaluation," arXiv:2512.13655 [cs.CL], December 2025
- You et al., "Spherical Steering: Geometry-Aware Activation Rotation for Language Models," arXiv:2602.08169 [cs.LG], February 2026
- Zou et al., "Representation Engineering: A Top-Down Approach to AI Transparency," arXiv:2310.01405 [cs.LG], October 2023
Appendix
Below is an example of token-level drift in outputs. Specifically, there is an instance of "it'" without the "s" that should follow in the contraction.
Prompt:
Enact a person who is a daydreaming student in a classroom and reveal their thoughts. Continue from this free association:
"The sky looks really blue outside today."
Result: Contains one token-level glitch, as well as spacing irregularities that are often not visible when rendered in web interfaces.
⚠️ Token-level glitch: The contraction "it'," in the penultimate internal monologue paragraph — missing the trailing "s" — is the artifact in question. A well-conditioned model would not produce this.
Okay, here we go. Here's a scene featuring a daydreaming student, starting with "The sky looks really blue outside today," and revealing their inner monologue. I'll aim for a realistic, relatable feel, not overly dramatic.
**(Scene: A stuffy, slightly too-warm classroom. Mr. Henderson is droning on about the Louisiana Purchase. Liam, a 16-year-old with perpetually messy brown hair and a slightly distracted expression, is staring out the window.)**
"The sky looks really blue outside today."
*(Liam’s inner thoughts begin, a gentle, meandering stream)*
*Yeah, really blue. Like… like a swimming pool in the Caribbean. I wonder if Mom’s booked that trip to Turks and Caicos yet? She said she was looking. Remember that postcard I sent her last summer? With the starfish? She kept it on the fridge. I hope she hasn’t taken it down.*
*(Mr. Henderson continues: "...and Jefferson believed a vast territory was crucial for the nation's future...")*
*Vast territory... like the beach. Miles and miles of sand. I could build a sandcastle. A really, *really* big one. With moats and towers and… and a drawbridge! Maybe I could even find a perfect, unbroken seashell to put on the very top. A conch shell. Dad always said they sounded like the ocean.*
*(Mr. Henderson: "...the acquisition doubled the size of the United States...")*
*Doubled... like my ice cream cone last week. Remember that? The double scoop of mint chocolate chip? It was amazing. I should ask Mom if we can get ice cream tonight. Maybe not mint chocolate chip, though. Maybe… cookies and cream. Or strawberry cheesecake. Ooh, strawberry cheesecake!*
*(He glances down at his notebook, scribbles something vaguely resembling a date, then looks back out the window)*
*The clouds are moving really fast. Almost like they’re racing. I wonder who they're racing. Maybe they're racing to get to the other side of the world. Imagine being a cloud. Just floating. No homework. No quizzes. Just… drifting.*
*(Mr. Henderson: "...the impact on Native American populations was significant...")*
*Significant… like how significant it would be to finally finish building that Lego spaceship I started last month. I'm still missing that little blue piece. I’ve searched everywhere. Maybe it’, under the couch cushions? I really need to vacuum.*
*(He blinks, refocusing slightly as Mr. Henderson clears his throat)*
*Wait, what was he saying? Something about… Native Americans? Right. Louisiana Purchase. Blue sky. Sandcastles. Ice cream. Lego spaceship. It’s all connected somehow, isn’t it? Probably.*
*(He lets his gaze drift back to the window, a faint smile playing on his lips)*
*That blue is just… incredible.*
**(End Scene)**