Lenses, Frames, and Stances

Lenses, to see with. Frames, to build with. Stances, to find.

Disclaimer: This is a placeholder written by Claude based on my notes from over 2 years. I will, or won’t, get around to cleaning this up.


The short version

  • Lens describes how and what you see. A lens filters what you notice versus what you dismiss. You don’t choose what’s in the territory, but you’re always choosing (often unconsciously) which transformation function you apply to it.
  • Frame describes how you build. Given what you’ve noticed, a frame is the approach you use to construct understanding, solutions, or decisions. Frames can be bottom-up, top-down, methodology-driven, or vibes-based.
  • Stance describes how you react. A stance is the set of reactions and reaction-seeds that are “on the surface” in a given context. Partly environmental (you’re tired, you’re hungry), partly internal (your habitual emotional wiring toward certain situations).

Why not just “maps”?

Rationalists talk about maps and territory. I prefer lenses over maps because lenses foreground the transformation function rather than the output. Brains build maps automatically. Lenses are the thing worth introspecting on. A map is the other side of the lens — how the territory appears to you after transformation. The lens is where you have leverage.
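As a loose illustration of the transformation-function framing, here is a toy sketch. Everything in it (the lens functions, the keyword matching, the example spec) is invented for illustration, not an established method:

```python
# Toy model: the territory is raw observations, a lens is a
# transformation function, and a map is the lens's output.
# All names and matching rules here are illustrative only.

from typing import Callable

Territory = list[str]
Map = list[str]
Lens = Callable[[Territory], Map]

def problem_finding_lens(territory: Territory) -> Map:
    """Surfaces only the observations that look like potential failures."""
    return [obs for obs in territory if "breaks" in obs or "edge case" in obs]

def motivation_lens(territory: Territory) -> Map:
    """Surfaces only the observations that hint at what the client wants."""
    return [obs for obs in territory if "wants" in obs or "goal" in obs]

spec = [
    "client wants faster exports",
    "date parsing breaks on leap years",
    "goal: reduce support tickets",
    "edge case: empty uploads",
]

# Same territory, two different maps. The maps are just outputs;
# the lens is the part you can actually inspect and swap.
print(problem_finding_lens(spec))
print(motivation_lens(spec))
```

The point of the sketch is only that the interesting object is the function, not its output: you rarely get to edit the territory, but you can notice which lens you are running and choose a different one.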

Examples

A compsci-brained person receiving a spec often defaults to a problem-finding lens: where are the edge cases, what could break. Useful when reviewing code. Actively harmful when a client is trying to tell you what they actually want — where a motivation lens (what are they looking for, what does this spec tell me about their goal) serves better.

For frames: an EA reading AI papers could frame their analysis around argument credibility, first-principles empirics, ML technicalities, theory of mind, or evolutionary biology intuitions. Each produces a different model. None is complete alone.

For stances: when your code doesn’t compile, you might find yourself in a frustrated stance or a curious stance. Same trigger, different reaction surface. When receiving relationship feedback, you might default to an unconfident stance (“obviously the other person knows better”) that causes you to internalize criticism you’d rationally disagree with. Or you could shift into an angry stance, sometimes healthy, sometimes lashing out in ways you do not endorse upon reflection.

Why this matters

People switch between lenses, frames, and stances automatically most of the time. This is fine for routine situations. It’s insufficient when you’re trying to build genuinely novel pieces of world-model, or when your default stance is actively working against you.

Intentional shifting between lenses, frames, and stances requires a baseline of introspective capacity: noticing which tool you’re currently applying and whether it fits the job. The payoff is that you stop being trapped in whatever cognitive posture you happened to arrive in.

Some lenses, frames, and stances are just bad. I don’t have a clean method for evaluating this beyond noticing which ones you naturally stop reaching for over time.