Vibe coding works. If you've used it seriously, you know this. You describe a feature, the model drafts it, you push it. Things that used to take a day take an hour. You ship faster than you ever have.
The problem isn't the speed. The problem is what gets left behind.
What "left behind" means
When you build normally — even scrappily — the reasoning behind a decision usually ends up somewhere. In a comment, a commit message, a Slack thread, a ticket. Imperfect, but traceable.
When you vibe code, you're in a flow state. You describe what you want, the model gives you something close, you adjust, you ship. The gap between intention and implementation is so small that writing it down feels redundant.
Weeks later, you have a product that works but you can't fully explain why it works the way it does.
Not the code — you can read the code. The reasoning. Why does the cancellation flow work like this? Why is the grace period five days and not seven? Why does the export fail silently on empty results instead of returning an error?
The code remembers what you built. It doesn't remember why.
Why this compounds with AI coding agents
The original vibe-coding problem — reasoning drift — is bad on its own. AI coding agents make it worse in a specific way.
Agents don't just read your code. They make inferences about intent. When you ask Claude Code or Codex to refactor a module, they read the existing implementation and decide what behavior to preserve and what to change. Without explicit constraints, that decision is partly guesswork.
The agent isn't being careless. It's doing exactly what you asked. But it's filling in missing context with reasonable-looking assumptions — and reasonable assumptions about code aren't the same as correct assumptions about product behavior.
The result: a refactor that passes tests and breaks something real. An "improvement" that changes behavior your users depend on. A simplification that removes an edge case that was load-bearing.
You can't blame the agent for this. The rules were never written down.
The missing layer
There's a layer between "what the code does" and "what the product promises" that most teams never formalize. PRDs describe intent. Tests verify implementation. Neither one captures the behavioral contract — the durable record of what your product guarantees and why.
For teams that have been building for years, this layer lives in accumulated memory. For vibe-coded products, it often doesn't exist at all.
That's the gap. And the longer you wait to address it, the more expensive it gets.
What to do about it
You don't need to slow down to fix this. You need a format that fits the way you actually work.
That's what a Product Behavior Contract is — a lightweight Markdown spec for capturing what your product promises to do. Not a PRD. Not tests. Not Gherkin. Just the smallest artifact that makes product reasoning explicit and version-controlled.
The pattern is:
PRD explains why. PBC specifies what. Code and tests prove how.
A .pbc.md file sits in your repo. It documents behaviors, rules, edge cases, and the decisions behind them — in plain Markdown your whole team can read. It's structured enough that tools can parse it, lightweight enough that you'll actually maintain it.
The goal isn't to slow down the vibe. It's to leave something behind for future you — and for every agent that touches your codebase next.
Start small
You don't need to spec the whole product at once. Start with the module that would cause the most damage if an agent got it wrong. Billing. Auth. Entitlements.
Write down three things for each behavior: what must happen, what must not happen, and the edge cases that matter. That's it. That's your behavior contract.
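As a rough sketch only — the actual pbc-spec format lives in the linked repo and may differ — a contract for the grace-period behavior mentioned earlier might look something like this (the file name, headings, and rationale below are illustrative, not the official schema):

```markdown
<!-- billing.pbc.md — illustrative sketch, not the official pbc-spec format -->

## Behavior: cancellation grace period

**Must happen**
- A canceled subscription retains full access for 5 days after cancellation.
- After the grace period ends, the account downgrades automatically.

**Must not happen**
- No new charges are issued during the grace period.
- Cancellation never deletes user data.

**Edge cases**
- Re-subscribing within the grace period resumes the original billing cycle.

**Why**
- Record the decision here — e.g. why the grace period is five days and not seven.
```

The point isn't the exact headings; it's that each behavior states the guarantee, the prohibition, the edge cases, and the reasoning in a form both a teammate and an agent can read before touching the code.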
Vibe coding got you here. A behavior contract is how you stay here.
The PBC spec is open source at github.com/stewie-sh/pbc-spec. Stewie is the product being built on top of it — join the beta waitlist if you want early access.