Harness Engineering: The Framework That Actually Makes AI Development Work
Most AI dev teams have a context problem, not a model problem. Here's the three-pillar harness engineering framework I built to fix that.
The Problem AI Development Has
Here’s a pattern I see constantly: a developer uses an AI coding assistant, gets code that looks reasonable, ships it, and then spends three days debugging something the AI got subtly wrong because it didn’t know about a critical constraint buried in a Notion page nobody updated in six months.
The AI didn’t fail. The context did.
Most teams treat AI coding tools as smart autocomplete. Point them at a file, ask them to write a function, accept the output. This works for trivial tasks and breaks down exactly when you need it most — on complex, multi-component work where constraints, conventions, and architectural decisions matter.
What Harness Engineering Is
Harness engineering is a three-pillar framework for AI-augmented development. The name comes from the idea that a harness constrains, guides, and amplifies — which is exactly what structured context does for AI output.
Pillar 1: Context Engineering
Everything an AI needs to make good decisions must be explicit, current, and machine-readable. Not in a wiki that’s three months out of date. Not spread across Slack threads and tribal knowledge. In a structured format that’s part of the repo and gets updated when the code changes.
This means AGENTS.md files that describe the system, docs/architecture.md that reflects what was actually built, and conventions files that capture the “why” behind decisions — not just the “what.”
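To make “machine-readable and part of the repo” concrete, here is a minimal Python sketch of how that context might be assembled for an assistant before it touches the code. The file names mirror the ones above; the conventions path, the helper function, and the preamble format are illustrative assumptions, not part of the framework itself.

```python
# Hypothetical sketch: load the repo's context files into a single preamble
# before asking an AI assistant to work on the codebase.
# File names mirror the ones mentioned above; everything else is illustrative.
from pathlib import Path

CONTEXT_FILES = [
    "AGENTS.md",             # what the system is and how agents should behave in it
    "docs/architecture.md",  # what was actually built, kept current with the code
    "docs/conventions.md",   # the "why" behind naming, layering, and style decisions
]

def build_context(repo_root: str = ".") -> str:
    """Concatenate the harness files; fail loudly if any are missing."""
    parts = []
    for rel in CONTEXT_FILES:
        path = Path(repo_root) / rel
        if not path.exists():
            raise FileNotFoundError(f"Context file missing from repo: {rel}")
        parts.append(f"## {rel}\n\n{path.read_text()}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_context())
```

The point is not this particular script. It is that the context lives next to the code and can be loaded mechanically, so the assistant never has to rely on a stale wiki or someone’s memory.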
Pillar 2: Architectural Constraints
AI models are very good at generating code that works in isolation and bad at knowing which patterns are forbidden in your specific codebase. Constraints solve this.
Constraints are mechanically enforced rules: layer boundaries that prevent cross-cutting imports, naming conventions validated by linters, structural tests that fail when the code drifts from the intended architecture. When constraints are mechanical rather than documented, AI output stays within bounds automatically.
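Here is a minimal sketch of one such mechanical constraint, written as a pytest-style structural test. The layer names, the src/ layout, and the forbidden-import table are hypothetical; a real project might reach for a dedicated tool such as import-linter instead.

```python
# Hypothetical structural test: the "domain" layer must never import from the
# "api" or "infrastructure" layers. Layer names and the src/ layout are
# illustrative, not prescribed by the framework.
import ast
from pathlib import Path

FORBIDDEN = {
    "src/domain": ("src.api", "src.infrastructure"),  # lower layer -> higher layers
}

def imported_modules(py_file: Path) -> set[str]:
    """Collect every module name imported by a Python source file."""
    tree = ast.parse(py_file.read_text())
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module)
    return names

def test_layer_boundaries():
    for layer_dir, banned_prefixes in FORBIDDEN.items():
        for py_file in Path(layer_dir).rglob("*.py"):
            for module in imported_modules(py_file):
                assert not module.startswith(banned_prefixes), (
                    f"{py_file} imports {module}, which crosses a layer boundary"
                )
```

Because the rule is a test rather than a paragraph in a doc, AI-generated code that crosses a boundary fails CI the same way any other regression would.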
Pillar 3: Entropy Management
Codebases drift. Docs go stale. Conventions get forgotten. AI-augmented development accelerates this drift because the volume of generated code is higher.
Entropy management is the discipline of keeping the harness current: running periodic audits to catch documentation drift, flagging dead code and inconsistent naming, ensuring constraints still reflect the current architecture. Without this, the context that makes AI output trustworthy degrades over time.
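As an illustration of what a periodic audit can look like, the sketch below flags context files whose last commit lags well behind the last change to the source tree. The watched paths, the 30-day threshold, and the git-based heuristic are assumptions for the example, not prescriptions.

```python
# Hypothetical drift audit: warn when a context file has not been touched for
# a while even though the code it describes keeps changing. The file list,
# the src/ path, and the threshold are illustrative choices.
import subprocess
from datetime import datetime

WATCHED_DOCS = ["AGENTS.md", "docs/architecture.md", "docs/conventions.md"]
MAX_LAG_DAYS = 30

def last_commit_date(path: str) -> datetime:
    """Return the author date of the most recent commit touching `path`."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%aI", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return datetime.fromisoformat(out)

def audit() -> None:
    code_date = last_commit_date("src")  # most recent change to the code itself
    for doc in WATCHED_DOCS:
        lag = (code_date - last_commit_date(doc)).days
        if lag > MAX_LAG_DAYS:
            print(f"DRIFT: {doc} is {lag} days behind the last code change")

if __name__ == "__main__":
    audit()
```

Run on a schedule or as a CI warning, a check like this turns “the docs are probably stale” from a vague suspicion into a concrete, reviewable signal.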
What It Delivers
The 1st Franklin Financial borrower portal — a regulated FinTech application — went from structured discovery to working code in approximately 2.5 hours of active development time. That’s not a magic AI trick. That’s what happens when the context is complete, the constraints are enforced, and the discovery work is done right before a line of code is written.
The methodology also lets less-senior developers execute against AI-generated specs at a senior delivery bar. When the constraints are mechanical, a developer doesn’t need to carry all the architectural knowledge in their head — the harness enforces it.
Who This Is For
Teams where AI coding tools are producing code that works in demos and breaks in production. Teams where “AI-generated” has become a code review concern rather than a productivity multiplier. Teams shipping faster than their documentation and constraints can keep up.
If you’re evaluating whether this approach fits your team, I’m happy to talk through it.