Regulated FinTech Portal Built in 2.5 Hours
How harness engineering delivered a regulated FinTech borrower portal from structured discovery to working code in 2.5 hours of active development.
The Problem
A regulated financial services client needed a modern borrower portal — an authenticated web application allowing loan customers to view account status, make payments, and access documents. The constraints were significant: FFIEC compliance requirements, specific integration points with legacy servicing systems, and a contract timeline that didn’t have room for extended discovery cycles.
The ask landed on my desk. As a senior SA at Amsive, I had harness engineering tooling built and ready. This was the first real stress test.
The Approach
Standard SA work first: scoped the requirements, identified the integration constraints, documented the data model and compliance requirements in machine-readable format. This structured discovery phase is where most of the thinking happens — and it’s the phase most shops rush.
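"Machine-readable" here could be any structured format; as a minimal sketch (every field name and value below is hypothetical, not the client's actual discovery output), the idea looks something like this in Python:

```python
from dataclasses import dataclass, field

@dataclass
class DiscoveryContext:
    """Structured discovery output a build pipeline can act on directly."""
    requirements: list              # scoped functional requirements
    integration_points: dict        # legacy servicing-system touchpoints
    data_model: dict                # entity name -> field names
    compliance_rules: list = field(default_factory=list)  # e.g. FFIEC-derived

# Illustrative values only
ctx = DiscoveryContext(
    requirements=["view account status", "make payments", "access documents"],
    integration_points={"servicing": "read-only bridge to legacy system"},
    data_model={"Loan": ["id", "balance", "status"]},
    compliance_rules=["no full account numbers in logs"],
)
```

The point is not the schema itself but that every downstream generation step consumes the same typed context instead of prose notes.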
The difference was what came next. Instead of handing discovery notes to a developer and waiting, I fed the structured context directly into an AI-augmented build pipeline using my harness engineering framework:
- Context layer: All requirements, constraints, integration specs, and compliance rules in a format the AI could act on with confidence
- Constraint enforcement: Architectural guardrails that prevented the AI from generating code that violated compliance requirements or deviated from the agreed data model
- Human gates: Review checkpoints at discovery, architecture, and key implementation milestones — not every line of code
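The three layers above can be sketched as a single loop. This is a toy illustration under my own assumptions (the function names, the stubbed model, and the string-matching guardrail check are all invented for the sketch, not the real pipeline):

```python
def run_pipeline(context, generate, review):
    """Hypothetical harness loop: human gates at fixed milestones,
    then generation constrained by guardrails."""
    for stage in ("discovery", "architecture", "implementation"):
        if not review(stage, context):
            raise RuntimeError(f"Human gate '{stage}' rejected; pipeline halted")
    artifact = generate(context)
    # Constraint enforcement: feed violations back until the artifact is clean.
    while violations := [g for g in context["guardrails"] if g in artifact]:
        artifact = generate({**context, "fix": violations})
    return artifact

# Stub model: the first draft violates a guardrail, the retry does not.
drafts = iter(["store(plaintext_ssn)", "store(tokenized_ssn)"])
result = run_pipeline(
    {"guardrails": ["plaintext_ssn"]},
    generate=lambda ctx: next(drafts),
    review=lambda stage, ctx: True,  # all gates approved in this demo
)
```

Note that the human gates fire at milestones, not per artifact: the expensive human attention goes where judgment matters, and the mechanical constraint check runs on every generation.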
The result: working code from structured discovery in approximately 2.5 hours of active development time.
What Made It Work
This wasn’t “AI wrote the whole thing.” The 2.5 hours was possible because the discovery work was complete and machine-readable before development started. Garbage in, garbage out — harness engineering is 80% about the quality of the context and constraints going in.
The compliance requirements that would normally slow a regulated-industry project down became input constraints rather than afterthought reviews. The AI operated within guardrails that reflected real compliance rules, not approximations.
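"Input constraints rather than afterthought reviews" means the rules are data the pipeline checks automatically, not a checklist applied after the fact. A hedged sketch, with invented rule IDs and patterns (these are illustrative, not real FFIEC citations):

```python
# Illustrative rules only; real rules would be derived from the compliance spec.
COMPLIANCE_RULES = [
    {"id": "AUTH-01", "requires": "mfa_check",
     "why": "multi-factor auth on customer access"},
    {"id": "LOG-02", "forbids": "account_number",
     "why": "no full account numbers in logs"},
]

def compliance_failures(artifact: str) -> list:
    """Return IDs of rules the generated artifact violates."""
    failures = []
    for rule in COMPLIANCE_RULES:
        if "requires" in rule and rule["requires"] not in artifact:
            failures.append(rule["id"])
        if "forbids" in rule and rule["forbids"] in artifact:
            failures.append(rule["id"])
    return failures
```

Because the rules are structured data, the same source of truth can be injected into the generation context up front and enforced as a check afterward.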
Honest Framing
The contract is pending as of this writing — I’m sharing this as a capability demonstration, not a closed win. The technical outcome is real and demonstrable. I can walk through the repo and the methodology on any call.
What I’d Do Differently
Tighter scope documentation at the start would have reduced one revision cycle mid-build. When the context is this compressed, any ambiguity in requirements shows up as wasted generation cycles. Discovery completeness and code quality are directly correlated.
The Broader Lesson
The constraint isn’t AI capability — it’s the quality of human-defined context and architectural guardrails. Most teams that struggle with AI-generated code have a context problem, not a model problem.