Technical Context
I read such statements carefully—not as hype, but as markers of tool maturity. Boris Cherny (Claude Code lead at Anthropic) formulated his thesis bluntly in a YC Lightcone interview: for him, coding is "practically solved" today, and by the end of 2026, this will be the norm "for everyone, regardless of domain." Crucially: he didn't say "programmers will disappear"; he said the value of manually typing code as the primary job is vanishing.
As an architect, what catches my attention aren't the words, but the supporting facts around Claude Code. Public estimates link the tool to roughly 4% of public GitHub commits, with rapid growth (DAU doubling monthly). Inside Anthropic, Cherny describes a mode where he makes 22–27 pull requests a day, with code that ships without manual edits, powered by an Opus 4.5-level model and an agentic workflow. This isn't a "chatbot suggesting a function," but a terminal agent that orchestrates changes, bundles them into commits, navigates the project, and opens pull requests.
I also note a critical detail: sources reveal almost no technical "magic" (formal verification, new RL loops, or proven correctness guarantees). The forecast relies on the exponential improvement of models and the agentic organization of work: where LLMs used to generate fragments, the agent now handles the chain of find → modify → verify → finalize. Architecturally, this means the bottleneck shifts from code generation to managing context, constraints, and validation.
And one more nuance I emphasize to clients: even if "coding is solved," it doesn't mean "engineering is solved." Software production still involves dependencies, migrations, security, observability, responsibility for changes, and the cost of errors. AI simply drastically accelerates the part that used to be the most time-expensive—implementation.
Business & Automation Impact
If we take Cherny's prediction seriously, the winners won't be companies with the "strongest programmers," but those with the fastest cycle: specification → verifiable change → release. In my practice at Nahornyi AI Lab, this cycle is what usually stalls growth: requirements are vague, tests are scarce, environment access is chaotic, and "done" isn't measured. An AI agent in such an environment offers no magic—it just accelerates the chaos.
I see three direct effects on development organization and AI automation:
- The developer's role shifts towards "Specification Engineer." Real value lies in setting constraints, choosing interfaces, defining acceptance criteria, and describing not just the happy path, but failure modes. If an engineer cannot formalize requirements, the agent will generate something "plausibly working"—the most expensive class of errors.
- QA and Security become part of AI architecture, not separate departments. When PRs are created by the dozen daily, manual code review stops scaling. I build automatic checks into AI solutions: linters, SAST/DAST, policy-as-code, secret scanning, contract tests, smoke sets, plus "limiters" on agent actions (what can be changed, where it can deploy, which commands are banned).
- Integration cost drops, error cost rises. Writing a component is easy, but integrating it into the landscape (data, permissions, audit, SLA) remains hard. Therefore, demand shifts towards those who know how to execute AI implementation and change processes, not just "connect a model."
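The "limiters" on agent actions mentioned above can be sketched as a policy-as-code gate. This is a minimal illustration, not a real tool: the allowed path prefixes, banned commands, and the `AgentChange` structure are all hypothetical assumptions for the sake of the example.

```python
from dataclasses import dataclass

# Hypothetical policy: which paths an agent may touch and which shell
# commands are banned outright. All names here are illustrative.
ALLOWED_PATH_PREFIXES = ("src/", "tests/", "docs/")
BANNED_COMMANDS = ("rm -rf", "kubectl delete", "terraform destroy")

@dataclass
class AgentChange:
    files: list[str]      # paths the agent wants to modify
    commands: list[str]   # shell commands the agent wants to run

def policy_violations(change: AgentChange) -> list[str]:
    """Return all policy violations; an empty list means the change may proceed."""
    violations = []
    for path in change.files:
        if not path.startswith(ALLOWED_PATH_PREFIXES):
            violations.append(f"path outside allowed scope: {path}")
    for cmd in change.commands:
        if any(banned in cmd for banned in BANNED_COMMANDS):
            violations.append(f"banned command: {cmd}")
    return violations

ok = AgentChange(files=["src/billing.py"], commands=["pytest -q"])
bad = AgentChange(files=["infra/prod.tf"],
                  commands=["terraform destroy -auto-approve"])
print(policy_violations(ok))   # []
print(policy_violations(bad))  # two violations: path and command
```

The point of the sketch is that such a gate runs automatically on every agent-generated change, so review effort scales with violations, not with PR count.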
Who loses? Teams where engineering culture runs on heroism and "we'll fix it tonight." Agentic tools push the pace of change beyond what nightly heroics can compensate for. Who wins? Those with discipline: CI/CD, a testing pyramid, observability, strict access rights, and proper product documentation.
A separate note on business functions. Cherny is right that PMs and designers will be able to "code" more. But I don't see this replacing developers; I see it as expanding the surface area of change. Consequently, a new operating model is needed: who answers for quality, who approves changes, how the "truth" about requirements is stored, and how risk is assessed. Without it, you get growth in incidents, not in productivity.
Strategic Vision & Deep Dive
My non-obvious conclusion: by late 2026, competition will move from "who writes code faster" to "who has the better trust loop." I call this the trust pipeline: specification → generation → verification → release → monitoring → rollback. Claude Code and its peers amplify only the middle, while business wins when the entire loop is closed and measurable.
In Nahornyi AI Lab projects, I increasingly build architecture as if an agent will write the code while a human manages the boundaries. This changes system design:
- Modularity and Contract-First become survival methods, not theory, under high PR frequency. The better defined the contracts (OpenAPI/AsyncAPI, event schemas, SLAs), the safer "AI speed" becomes.
- Requirements turn into executable artifacts: acceptance tests, golden datasets, migration checks, policy checks. I insist that the "spec" is not a PDF, but a set of validations the agent cannot bypass.
- Data and Access are central to any AI integration. An agent with access to the prod database and command execution capability isn't an assistant; it's a new privileged subject. I design minimal rights, isolated environments, action logging, and mandatory review-gates for risky operations.
I also don't believe that "zero manual edits" becomes a mass reality without changes to the engineering environment. What works for Cherny at Anthropic (ideal tooling, deep system understanding, a specially tuned workflow) breaks in a typical company against legacy code, flaky tests, and unwritten business rules. Therefore, I view the forecast not as the "end of a profession," but as a deadline for process restructuring: either you learn to feed agents high-quality constraints and verify results automatically, or your competitors' speed becomes unreachable.
The hype version of this story is simple: "AI will write everything." The valuable version is harder: "We built a change factory where AI is the labor force, and quality and risk are managed parameters." That will be the difference between companies that merely bought a tool and companies that turned it into an advantage.
If you want to verify how Claude Code/agentic approaches fit your development—I invite you to discuss your delivery pipeline, requirements, tests, and security. Write to Nahornyi AI Lab: I, Vadim Nahornyi, will analyze your case and propose a practical AI solution architecture with clear implementation steps.