Technical context: what exactly "agentic" approaches offer
I carefully reviewed the workshop program, and I like that the focus is not on "AI writing code," but on the engineering discipline surrounding agentic loops. In agentic coding, an agent doesn't just generate code snippets: it plans, executes steps, calls tools, tracks progress, and verifies its own work in iterative cycles.
The tool list features universal agents like Claude Code and OpenCode. I view them not as "yet another IDE plugin," but as execution engines living alongside the repository and CI/CD. They read the project structure, create branches, open PRs, run tests, triage issues, and leave comments.
I want to specifically highlight the section on CLAUDE.md and Agent Spec Driven Development. Specifying agent behavior directly within the repository acts as a practical "contract": defining the goal, tools, constraints, Definition of Done, and sub-agent roles. Whenever I design the architecture for AI solutions, this specification becomes the anchor that outlasts model updates and team changes.
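To make the "contract" idea concrete, here is a minimal sketch of what such a spec pins down, expressed as a data structure. All field names, tools, and checks are illustrative assumptions, not a standard CLAUDE.md schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Hypothetical shape of an in-repo agent spec: goal, tools,
    constraints, Definition of Done, and sub-agent roles."""
    goal: str                              # what the agent is hired to do
    allowed_tools: list[str]               # tools the agent may call
    constraints: list[str]                 # hard "never do" rules
    definition_of_done: list[str]          # checks that must pass before merge
    subagents: dict[str, str] = field(default_factory=dict)  # role -> duty

    def is_complete(self) -> bool:
        """A spec is usable only if goal, tools, and DoD are all pinned down."""
        return bool(self.goal and self.allowed_tools and self.definition_of_done)

spec = AgentSpec(
    goal="Stabilize flaky integration tests in the payments service",
    allowed_tools=["git", "pytest", "ripgrep"],
    constraints=["never touch production configs", "no force-push"],
    definition_of_done=["all tests green in CI", "diff reviewed by a human"],
    subagents={"verifier": "re-runs the test suite and inspects diffs"},
)
print(spec.is_complete())  # expected: True
```

The point of writing it down this explicitly is exactly the one above: the spec, not the model, is the artifact that survives model updates and team changes.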
The Skills / MCP vs. CLI discussion is really about tool interfaces. MCP servers provide standardized access to systems (repos, task trackers, CRMs, browsers, internal APIs) and allow for fine-grained permission control. The CLI approach is easier to bootstrap but often scales poorly in terms of security and observability.
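The fine-grained permission control mentioned above can be sketched as a default-deny policy table in front of tool calls. The policy entries and tool names here are invented for illustration; a real MCP server enforces this at the protocol boundary rather than in application code:

```python
# Default-deny policy gate for agent tool calls (illustrative sketch).
POLICY = {
    "repo.read":  {"allowed": True},
    "repo.write": {"allowed": True, "requires_approval": True},
    "crm.delete": {"allowed": False},
}

def call_tool(action: str, approved: bool = False) -> str:
    rule = POLICY.get(action, {"allowed": False})  # unknown actions are denied
    if not rule["allowed"]:
        raise PermissionError(f"{action}: denied by policy")
    if rule.get("requires_approval") and not approved:
        raise PermissionError(f"{action}: needs human approval")
    return f"{action}: executed"

print(call_tool("repo.read"))                  # reads are open
print(call_tool("repo.write", approved=True))  # writes are gated but approvable
```

The design choice worth copying is the default: anything not explicitly allowed is denied, which is hard to express cleanly when the agent drives raw CLI commands.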
Finally, there's the Asymmetry of Verification—a concept that saves real-world teams more money than a "smarter model" does. Generating a change takes time, but verifying it is fast. The agent does the draft work, while a human or a verifier-agent validates the diffs, tests, contracts, and risks.
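The asymmetry can be sketched as a battery of cheap checks applied to a candidate diff. The individual checks below are toy stand-ins (assumptions for illustration), but the shape is the point: each verifier runs in milliseconds, while regenerating the work costs minutes:

```python
# Toy verifier battery for a candidate diff (illustrative sketch).
def no_secrets(diff: str) -> bool:
    # Cheap proxy for a real secret scanner.
    return "AKIA" not in diff and "PRIVATE KEY" not in diff

def has_tests(diff: str) -> bool:
    # Toy proxy for "the change carries assertions".
    return "assert" in diff

def verify(diff: str) -> bool:
    checks = [no_secrets, has_tests]
    return all(check(diff) for check in checks)

candidate = "def add(a, b):\n    return a + b\nassert add(2, 2) == 4\n"
print(verify(candidate))  # expected: True
```

In practice the battery would call real tools (test runner, linter, contract checks), but the economics are the same: verification stays cheap no matter how expensive generation was.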
Business impact and automation: who wins and who starts losing
I can see that in 2026, Agentic Coding shifts the market from "developer acceleration" to AI automation across entire development lifecycles. Companies with many repeatable engineering operations will benefit the most: monorepo maintenance, bug triage, infrastructure support, routine dependency PRs, documentation generation, and test stabilization.
Those who try to layer agents "on top of chaos" will start losing. If you lack a clear Definition of Done, testing environments, branching strategies, and a defined access model, agents will quickly turn into generators of technical debt and incidents.
On Nahornyi AI Lab projects, I almost always start not by selecting a model, but by designing the control loop: MCP permissions, sandboxes, default read-only policies, action logging, network call limits, and mandatory CI checks. This is what AI architecture means in an applied sense—not diagrams for the sake of diagrams, but an engine for controlled execution.
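The control-loop elements listed above can be sketched as a small wrapper: read-only by default, every action logged, network calls budgeted. The class name, limits, and flags are assumptions for illustration, not a specific product's API:

```python
import time

class ControlLoop:
    """Illustrative 'design the loop before the model' sketch:
    default read-only, mandatory action logging, network budget."""

    def __init__(self, read_only: bool = True, max_network_calls: int = 10):
        self.read_only = read_only
        self.max_network_calls = max_network_calls
        self.network_calls = 0
        self.audit_log: list[tuple[float, str]] = []

    def act(self, action: str, mutating: bool = False, network: bool = False) -> str:
        if mutating and self.read_only:
            raise PermissionError(f"{action}: blocked, loop is read-only by default")
        if network:
            self.network_calls += 1
            if self.network_calls > self.max_network_calls:
                raise RuntimeError(f"{action}: network budget exhausted")
        self.audit_log.append((time.time(), action))  # every action is logged
        return f"{action}: ok"

loop = ControlLoop()
print(loop.act("read project structure"))
print(len(loop.audit_log))  # expected: 1
```

Note the order of defaults: the agent has to be explicitly granted mutation and network rights, which is the applied meaning of "architecture as an engine for controlled execution."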
Another business effect: the role structure is shifting. I increasingly recommend that clients appoint a "verification engineer" (or lead reviewer) responsible for building fast checks: tests, linters, contract assertions, and security checklists. When this is in place, verification asymmetry becomes your competitive advantage rather than just an elegant idea.
Strategic vision: agency becomes the standard, but only through specifications
My forecast is simple: over the next 12–18 months, agentic pipelines will normalize, just as CI and Infrastructure as Code did in the past. However, the agents that make it to production won't be the "most autonomous" ones; they will be the ones best wrapped in specifications and constraints.
I already observe a recurring pattern in AI adoption: first, a team buys a tool; then, they realize the agent doesn't respect boundaries; and only after incidents occur does discipline emerge—specifications, permissions, observability, and role segregation. Agent Spec Driven Development allows you to skip this painful phase if you make it part of the process from day one.
In the practice of Nahornyi AI Lab, I link agent specifications to a "production contract": which artifacts the agent is allowed to create (PRs, migrations, configs), which actions are forbidden without human approval, and which quality metrics are mandatory (test coverage, SAST, secret scanning). This elevates AI development from an experimental mode to a manageable system.
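The production contract described above can be expressed as data plus a single gate function. The artifact names, thresholds, and metric keys below are assumptions chosen to mirror the text, not a standard schema:

```python
# Illustrative "production contract" for agent-created artifacts.
CONTRACT = {
    "allowed_artifacts": {"pull_request", "migration", "config"},
    "needs_approval": {"migration", "config"},     # forbidden without a human
    "quality_gates": {"test_coverage_min": 0.80,   # mandatory metrics
                      "sast_clean": True,
                      "no_secrets": True},
}

def may_ship(artifact: str, approved: bool, metrics: dict) -> bool:
    if artifact not in CONTRACT["allowed_artifacts"]:
        return False
    if artifact in CONTRACT["needs_approval"] and not approved:
        return False
    gates = CONTRACT["quality_gates"]
    return (metrics.get("test_coverage", 0.0) >= gates["test_coverage_min"]
            and metrics.get("sast_clean", False)
            and metrics.get("no_secrets", False))

ok = may_ship("pull_request", approved=False,
              metrics={"test_coverage": 0.85, "sast_clean": True, "no_secrets": True})
print(ok)  # expected: True
```

Encoding the contract as data rather than tribal knowledge is what turns "experimental mode" into a manageable system: the same gate runs in CI for every agent-produced artifact.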
If you want to achieve "AI automation" in development, I recommend starting with a single loop—for example, issue triage or automated dependency update PRs—and immediately designing MCP permissions and verifier cycles. Once that is established, scaling to feature development and refactoring becomes far faster.
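As a sketch of that first narrow loop, here is issue triage with an explicit verifier step. The labels and routing rules are invented for illustration; the structural point is that anything the loop cannot classify escalates to a human, and the verifier checks an invariant before the result is applied:

```python
# Illustrative single-loop starting point: issue triage with a verifier.
def triage(issue_title: str) -> str:
    title = issue_title.lower()
    if "crash" in title or "security" in title:
        return "priority:critical"
    if "docs" in title or "typo" in title:
        return "priority:low"
    return "priority:needs-human"  # anything unclear escalates to a person

def verified_triage(issue_title: str) -> str:
    label = triage(issue_title)
    # Verifier cycle: a cheap invariant check before the label is applied.
    assert label.startswith("priority:"), "triage produced an unknown label"
    return label

print(verified_triage("Crash on login with SSO"))  # expected: priority:critical
```

A loop this small is easy to wrap in MCP permissions (read issues, write labels, nothing else), which makes it a safe place to build the discipline before moving to feature work.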
This analytical brief was prepared by Vadym Nahornyi — lead practitioner at Nahornyi AI Lab, specializing in AI architecture, AI integration, and development automation with agents. I step in when you need to go beyond "playing with an agent" and instead build a secure, measurable, and profitable operational loop. Contact me—we will analyze your development process, select the right agentic scenarios, and put together a comprehensive adoption roadmap.