A product design framework for software that serves both AI agents and human operators as equal first-class users.
Steve Yegge's Software 3.0 describes six survival levers for software products in the age of AI agents. Each lever measures a different way software can position itself to survive agents that can generate inferior alternatives from scratch.
We take it one step further. Yegge's levers describe a straight line: human uses AI, AI uses tool. Our products close the triangle. Human, agent, and product surface all communicate bidirectionally. Native UX and native AX. Shared state. Either user acts on behalf of the other. The loop closes. Velocity compounds.
This is the seventh lever. We call it resonance.
The optimization target is cost per feature — real dollars across all token types and models. The methodology loops measure it, quality gates constrain it, and the system drives it down week over week.
| Week | $/feature | Agents/session | Feat/session | What changed |
|---|---|---|---|---|
Agent leverage is the leading indicator. Each session deploys more parallel agents. That drives $/feature down while output compounds. The 7-day average is now $--/feature at -- features/day at -- agents/session. The loop measures, tightens, and accelerates.
$/feature at $-- is still falling. 97% of spend is Opus cache reads. The next optimization is model routing: shifting work to Sonnet and Haiku where possible. The floor becomes the ceiling for the next iteration.
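The cost metric above can be made concrete. A minimal sketch of computing $/feature from session telemetry, assuming a per-(model, token-type) rate table and a usage log; the field names and schema are illustrative, not the actual pipeline:

```python
# Illustrative rates in USD per million tokens, keyed by (model, token_type).
# These mirror published Opus list prices but are hardcoded here as an example.
RATES_PER_MTOK = {
    ("opus", "input"): 15.00,
    ("opus", "output"): 75.00,
    ("opus", "cache_read"): 1.50,
    ("opus", "cache_create"): 18.75,
}

def session_cost(usage):
    """usage: list of (model, token_type, token_count) tuples for one session."""
    return sum(
        RATES_PER_MTOK[(model, ttype)] * count / 1_000_000
        for model, ttype, count in usage
    )

def cost_per_feature(sessions, features_shipped):
    """Total dollar spend across all token types, divided by features shipped."""
    total = sum(session_cost(s) for s in sessions)
    return total / features_shipped
```

Summing across all token types matters: with 97% of spend in cache reads, a metric that counted only input and output tokens would miss almost the entire bill.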
Updated 2026-03-23. All costs computed from published API rates across all token types (input, output, cache read, cache create). Quality instrumentation (CI pass rate, revert rate, composite quality score) is the next measurement frontier. All claims verifiable against git history and session telemetry.
A resonant product has three edges. Each connects two of the three participants. Each has distinct design constraints and distinct failure modes.
The product surface that faces the human. Designed to continuously calibrate challenge to the user's ability. Not gamification. The experience itself meets the user an increment above their current level, in a supportive manner, driving development toward equilibrium. At equilibrium: minor oscillation on a permanent frustration-and-reward treadmill at the user's natural max.
The principle draws from Csikszentmihalyi's flow state (the nexus of ability and challenge), Vygotsky's zone of proximal development (always just past what you can do alone, never past what you can do with support), and the Peter Principle inverted and made continuous (not a discrete level but a smooth adjustment).
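One way to picture the calibration loop: hold the next challenge a fixed increment above an estimate of the user's ability, and let outcomes pull the estimate up or down. A hypothetical sketch; `MARGIN`, `LEARNING_RATE`, and the update rule are illustrative stand-ins, not the framework's actual mechanism:

```python
MARGIN = 0.1         # how far past current ability to pitch the next task
LEARNING_RATE = 0.2  # how fast the ability estimate tracks outcomes

def next_challenge(ability_estimate):
    # Vygotsky-style target: just past what the user can do alone,
    # never past what they can do with support.
    return ability_estimate + MARGIN

def update_ability(ability_estimate, challenge, succeeded):
    # Success pulls the estimate up toward the challenge level;
    # failure nudges it below, producing the minor oscillation
    # around the user's natural max described above.
    target = challenge if succeeded else challenge - 2 * MARGIN
    return ability_estimate + LEARNING_RATE * (target - ability_estimate)
```

Run in a loop, the estimate converges to a band around the user's ceiling and oscillates there: the permanent frustration-and-reward treadmill at equilibrium.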
The product surface that faces the agent. The agent's equivalent of flow is the collapse of its branching factor. A good tool interaction takes the agent's space of possible next actions from many to one. The product's interface matches the agent's prior distribution so closely that interaction becomes recognition, not reasoning.
This is measurable. Five signals, all observable without understanding the mechanism:
- The single best metric. Lower means the interface is resonating with the agent's expectations.
- Each retry is a failed recognition. The agent expected one thing and got another.
- Count "might," "seems," "possibly" in agent output after tool interactions. Hedging means ambiguity.
- Tokens between receiving tool output and committing to the next action. Shorter means the path was obvious.
- Clear errors are cheap to recover from. Ambiguous partial successes are catastrophic.
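Two of these signals are easy to extract from a transcript. A hypothetical sketch, assuming a flat list of events with `type`, `text`, `name`, and `args` fields; real harness telemetry will have a different shape:

```python
HEDGES = ("might", "seems", "possibly")

def hedging_count(events):
    # Signal: hedged language in agent output means the tool
    # interaction left ambiguity behind.
    return sum(
        sum(e["text"].lower().count(h) for h in HEDGES)
        for e in events
        if e["type"] == "assistant"
    )

def retry_count(events):
    # Signal: a repeated identical tool call is a failed recognition.
    # args is assumed pre-serialized to a hashable string.
    seen, retries = set(), 0
    for e in events:
        if e["type"] == "tool_call":
            key = (e["name"], e["args"])
            if key in seen:
                retries += 1
            seen.add(key)
    return retries
```

Neither metric requires understanding why the agent stumbled, only observing that it did, which is what makes the signals cheap to instrument.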
The communication between human and agent. This edge is opaque to the product. The human and agent communicate independently through whatever harness they choose (Claude Code, Cursor, a chat window). The product never sees those exchanges.
The design constraint: every state system and interface must serve either user arriving with full authority, without knowing which one is in the chair or what conversation preceded their arrival. Both users are first-class. Both are opaque in motivation.
Four communication paths exist in a resonant system. Two are direct (cheap). Two are relayed (expensive).
| Path | Cost | Why |
|---|---|---|
| Product to human, direct | Low | The human is looking at your surface. |
| Product to AI, direct | Low | The agent reads your interface natively. |
| Product to AI to human | High | Burning tokens to reach a human through agent relay. |
| Product to human to AI | High | Burning human attention to reach an agent through human relay. |
The discipline: do not route through the expensive relay when you can hit the recipient directly. When you must relay, minimize translation cost.
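The routing discipline reduces to a small decision rule. A toy sketch; the cost constants are illustrative stand-ins for token spend and human attention, not measured values:

```python
# Illustrative edge costs: direct edges are cheap, relayed edges are an
# order of magnitude more expensive (tokens burned or attention burned).
DIRECT_COST = {"human": 1, "agent": 1}
RELAY_COST = {"human": 10, "agent": 10}

def route(recipient, direct_channel_available):
    """Pick the cheapest path to a recipient ('human' or 'agent')."""
    if direct_channel_available:
        return ("direct", DIRECT_COST[recipient])
    # Forced relay: reach the recipient through the other participant,
    # paying the translation cost.
    relay = "agent" if recipient == "human" else "human"
    return (f"relay via {relay}", RELAY_COST[recipient])
```

The point of the toy model is the asymmetry: whenever the direct edge exists, any design that routes through the relay is paying roughly 10x for the same message.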
The "opaque third edge" was overstated in the original formulation. In one tuple the product sees the human-AI exchange. In the next it does not. Design for the union of all visibility states.
K2 works. Human and calculator. The entire history of software is K2 resonance. K3 is the topology of the current era (AI agents as new participants), not a prerequisite for the phenomenon.
The framework itself is a ladder. Layer 1 (the framework you are reading) bootstraps the system until there is enough resonance data at scale for the system to teach itself. Layer 2 (formalism, pruned) replaces intuition with measurement. Layer 3 (measure and scale) replaces the framework entirely. Each layer is a ladder for the next. The goal is to stop encoding product management understanding and start scaling the measurement.
We apply this framework to everything we build. Mallcop's build process is the first public dataset: 4,657 commits across 17 repos, 2,116 agent dispatches, 3,075 work items closed, 210 test files added, and 148 findings surfaced with 135 fixed — all in 6 weeks, one person. The flaw rate spikes on adversarial sweep days and declines after fixes land. The curves tell the story.