The Technical Context
I watched Matt Pocock's presentation not just as a viewer, but as someone who constantly hits the same wall: AI implementation in development breaks not because of the model, but because of the process. And this is where his set of techniques really hits the mark.
The first thing I noted was using /grill-me before Plan Mode. Essentially, I see this not as a 'secret prompt,' but as a strict validation mode for the task at hand. I often find that code assistants are too eager to agree, and then carry incorrect assumptions down the line.
/grill-me is effective because it forces the AI to argue, find holes, and clarify constraints and edge cases before it starts writing a plan or code. For real-world development, this is cheaper than cleaning up a beautifully formatted but flawed implementation later.
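Mechanically, a mode like this is easy to reproduce yourself. Claude Code, for instance, lets you define custom slash commands as markdown files under `.claude/commands/`. A hypothetical `grill-me.md` might look like the sketch below; the wording is mine, not Matt's actual prompt:

```markdown
<!-- .claude/commands/grill-me.md — hypothetical sketch, not the original prompt -->
Before planning or writing any code for the task below, interrogate it:

1. List every assumption you are making and mark each as confirmed or unconfirmed.
2. Ask me the clarifying questions whose answers would change your approach.
3. Name the edge cases and failure modes the task description ignores.
4. Argue against the most obvious implementation: what would it get wrong?

Do not produce a plan or code until I have answered.

Task: $ARGUMENTS
```

The `$ARGUMENTS` placeholder is standard Claude Code syntax for whatever the user types after the command.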
The second strong point that made me nod almost automatically was TDD as a bare minimum. Not as an ideology for its own sake, but as insurance against the model's flights of fancy. If I make the assistant formalize the behavior with a test first, it invents less and adheres better to the contract.
Another useful pattern discussed was /domain-model. The description sounds like a concise summary of the domain model, accumulating knowledge in CONTEXT.md and ADRs. I like this approach for its restraint: no huge DDD shrine, but a record of decisions so the AI's next pass doesn't start with amnesia.
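An entry in such a file doesn't need ceremony. A hypothetical sketch of what one accumulated decision might look like (my format and made-up details, not the one from the talk):

```markdown
<!-- CONTEXT.md — hypothetical sketch of an accumulated-decision entry -->
## ADR-007: Store money as integer cents

- **Status:** accepted
- **Context:** floating-point rounding produced off-by-one-cent totals in invoices.
- **Decision:** all monetary amounts are integer cents; formatting happens only at the UI edge.
- **Consequences:** the assistant must never introduce dollar-valued `number` fields;
  conversions live in one module.
```

A dozen entries like this give the next AI pass its memory back at near-zero maintenance cost.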
And yes, I wouldn't treat /improve-codebase-architecture as a magic button either. It's more a way to steer the assistant into an architectural review, and I still wouldn't hand interface design over to a machine entirely. The simpler the interface, the less chance the model will 'optimize' it into a monster.
Impact on Business and Automation
For teams, the practical takeaway here is very down-to-earth. The winners are those who build AI automation around checks, tests, and explicit context. The losers are those who ask it to 'make it pretty' and are then surprised by unnecessary complexity.
The second effect I see with clients is cost savings on rework. When the domain model and architectural decisions are concisely documented, AI integration into the pipeline becomes more stable: fewer repeated explanations, less decision drift between iterations.
And the main conclusion doesn't seem alarming for developers at all. No, this doesn't look like 'AI will replace everyone.' It looks like an amplifier for those who know how to set boundaries, not a replacement for engineering thinking.
If your team is already writing code with Claude, Cursor, or similar tools, but the results vary from excellent to bizarre, I would start with guardrails like these. And if you want to build this into a proper workflow without guesswork, at Nahornyi AI Lab, we help structure AI automation so that the business gets predictable velocity, not another source of chaos.