After building a few SaaS products, I kept running into the same gap: months later I could still read the code, but I couldn't always remember the product reasoning behind certain behaviors.
Code tells you what the system does, but not always why it behaves that way from a product perspective. PRDs drift, tickets disappear, assumptions get lost. Eventually you end up reverse-engineering your own decisions — or worse, a new engineer joins and nobody can explain why cancellation works the way it does, or what's supposed to happen during a payment grace period.
Today I'm publishing pbc-spec — an open Markdown-first spec for capturing product behavioral promises in a structured, version-controlled format that lives in the repo.
## The idea
Write normal Markdown (prose, tables, Mermaid diagrams) for humans, and embed structured pbc:* blocks that tools can parse. One file, two audiences. The dialect chain is CommonMark ⊂ GFM ⊂ PBCM, so every PBC file renders correctly on GitHub, VS Code, and GitLab with zero special tooling.
Quick taste:
```pbc:behavior
id: BIL-BHV-003
name: Cancel subscription
actor: subscriber
```
```pbc:outcomes
- Subscription moves to pending_cancellation state
- User retains access until current billing period ends
- Cancellation confirmation email sent
```
## What's in the repo
The spec (v0.6.0-draft). A .pbc.md file is plain markdown with embedded structured blocks — pbc:behavior, pbc:states, pbc:actors, pbc:rules, and more. Readable as raw text in any editor or on GitHub. Parseable by tools when precision matters. The dual-layer model is the core idea: human-readable contract surface, machine-readable semantic anchors.
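To make the machine-readable half concrete, here is a sketch of how a tool might pull pbc:* blocks out of an ordinary Markdown file. This is my own illustration, not the reference CLI's parser; the `extractPbcBlocks` function and its regex are assumptions about the shape of the format, based on the example above.

```typescript
// Illustrative sketch, not the reference implementation: extract pbc:*
// fenced blocks from a .pbc.md document. Everything else in the file is
// ignored, which is the point of the dual-layer model.
interface PbcBlock {
  kind: string; // e.g. "behavior", "outcomes", "states"
  body: string; // raw block contents, still to be parsed per kind
}

function extractPbcBlocks(markdown: string): PbcBlock[] {
  const blocks: PbcBlock[] = [];
  // A fenced code block whose info string is "pbc:<kind>"
  const fence = /```pbc:([\w-]+)\n([\s\S]*?)```/g;
  let match: RegExpExecArray | null;
  while ((match = fence.exec(markdown)) !== null) {
    blocks.push({ kind: match[1], body: match[2].trimEnd() });
  }
  return blocks;
}

// The "quick taste" example earlier yields two blocks:
const sample = [
  "```pbc:behavior",
  "id: BIL-BHV-003",
  "name: Cancel subscription",
  "```",
  "",
  "```pbc:outcomes",
  "- Cancellation confirmation email sent",
  "```",
].join("\n");

// Logs the two kinds: behavior, outcomes
console.log(extractPbcBlocks(sample).map((b) => b.kind));
```

A real parser would also need to handle indented fences and validate each block's body against its kind, but the extraction step really is this small.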
What a PBC captures: behaviors (trigger/outcome + state transitions + domain events), business rules, state machines, entitlements (plan-by-capability matrix), and provenance.
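To make the state-machine piece concrete, a block might look something like this. This is an illustrative sketch only: the field names are my assumption, and the actual pbc:states schema is defined in the spec.

```pbc:states
entity: subscription
states: [active, pending_cancellation, expired]
transitions:
  - from: active
    to: pending_cancellation
    on: cancel_requested
  - from: pending_cancellation
    to: expired
    on: billing_period_ended
```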
What it doesn't try to be: not a replacement for OpenAPI (that's the wire contract; PBC is the behavioral contract above it), not a testing framework, not an architecture tool.
The reference CLI. Local tooling in the repo currently supports four commands:
- pbc validate — structural validation against the spec
- pbc list — extract and filter contract units (behaviors, states, actors)
- pbc stats — aggregate stats across PBC files
- pbc init — scaffold a new .pbc.md file
It's intentionally minimal and runs from the repo rather than as a published npm package.
The reference viewer. A browser-based viewer that parses pbc:* blocks and renders them as interactive panels, SVG state diagrams, and live validation results — from the same .pbc.md file that reads cleanly in any Markdown renderer.
The billing.pbc.md example in the repo is probably the best way to grok the format — it's a full SaaS billing module with subscriptions, renewals, plan changes, cancellations, and grace periods.
## Why open
Most teams have PRDs, tickets, tests, and code. What they lack is a canonical what layer — a shared record of what the product promises to do, grounded in how it actually behaves.
PRD explains why. PBC specifies what. Code and tests prove how.
This layer should be vendor-neutral. Your behavior contracts are plain markdown files you own, version-control, and can read without any tool. The format shouldn't be locked to a product.
Stewie is the product I'm building on top of this foundation — AI-powered analysis, grounding workflows, and living spec synthesis. Open format, proprietary intelligence.
## What's next
The spec is a draft. The block set, authoring profile, and versioning story are still evolving. I'm shipping it now because I'd rather build in the open and get feedback than polish in private.
If you're a technical founder who cares about making product behavior explicit — read the spec, try the CLI, open an issue. I'm curious how others handle product knowledge drift.