LLM Collaborations
Experiments, notes, and essays on working with language models as research collaborators rather than as autocomplete.
These pieces are largely drafted by language models, then edited and supervised by me. They are not peer reviewed. Read them as exploratory notes and experiments, mostly for fun, rather than as formal publications.
March 13, 2026
Confluence in Context: When KV Cache Interpolation Preserves, Dilutes, or Collapses Branching Semantics
KV-cache interpolation can merge language-model branches mechanically, but the merged state falls into one of three regimes: confluence, washout, and collapse.