Technical Context
I view SpecKit and OpenSpec not merely as "two more CLIs," but as an attempt to standardize the conversation between a human, a repository, and a coding assistant. Both approaches share a core philosophy: capture intent in spec.md, maintain the rules of engagement in constitution.md (or an equivalent), and force the assistant to operate within these boundaries rather than improvising.
As an architect, what I appreciate about SpecKit (github/spec-kit) is that it thinks in phases and disciplines the team. In a typical scenario, a top-level /specify command generates a substantial package of artifacts (specs, principles, task decomposition, checks). Yes, this can run to hundreds of lines, but that "verbosity" reduces the cost of errors during early architecture—especially in greenfield monorepos.
OpenSpec has a different focus: I see it as a convenient mechanism for iterative change proposals on live code. The logic of "propose change → apply → archive as a single source of truth" fits well with brownfield projects and teams not ready to go through a heavy pre-design phase every time. Technically, this usually manifests as a structure of change folders and several AI commands that help apply the specification to the code.
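The "propose change → apply → archive" lifecycle can be sketched as plain folder moves. This is a minimal illustration of the idea, not OpenSpec's actual implementation; the folder and file names (changes/, archive/, proposal.md) are assumptions made for the sketch.

```python
import shutil
from pathlib import Path

def propose_change(root: Path, change_id: str, proposal: str) -> Path:
    """Create a change folder holding the proposed spec delta."""
    change_dir = root / "changes" / change_id
    change_dir.mkdir(parents=True, exist_ok=True)
    (change_dir / "proposal.md").write_text(proposal)
    return change_dir

def archive_change(root: Path, change_id: str) -> Path:
    """After the change has been applied to code, move the folder to the
    archive, which then serves as the single source of truth for history."""
    src = root / "changes" / change_id
    dst = root / "archive" / change_id
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dst))
    return dst
```

The point of the archive step is that an "active" change can never silently linger: it is either still open under changes/ or closed under archive/, never both.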
However, I agree with the practical feedback from the community: neither toolkit is yet "agent-native." They have no native understanding of multi-repo setups, no built-in model for passing the same feature through a chain of sub-agents, and no inherent mechanisms such as plan mode or sub-agent orchestration. They are agent-agnostic—which is their strength for simple flows and their weakness for complex ones.
- SpecKit feels better in a monorepo where you can auto-create branches, strictly enforce standards, and verify task dependencies.
- OpenSpec wins where you need to make a series of changes quickly and neatly without turning each one into a "one-week mini-project."
- For advanced AI agent systems, both require external orchestration: separate prompts, roles, commands, and handoff rules.
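For the last point, external orchestration in its simplest form is just an explicit handoff object passed through an ordered chain of roles. A minimal sketch, with hypothetical role functions standing in for real calls to an assistant (all names here are illustrative, not part of either toolkit):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Handoff:
    """What one sub-agent passes to the next: the feature, its artifacts,
    and a log the human dispatcher can inspect between steps."""
    feature: str
    artifacts: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

def run_chain(feature: str,
              roles: list[tuple[str, Callable[[Handoff], None]]]) -> Handoff:
    """Drive a feature through the chain; each role reads and extends
    the shared handoff state."""
    h = Handoff(feature)
    for name, step in roles:
        step(h)
        h.log.append(name)
    return h

# Hypothetical roles; in practice each would invoke an assistant
# with its own prompt, constitution excerpt, and stop criteria.
def spec_writer(h): h.artifacts["spec.md"] = f"# Spec: {h.feature}"
def planner(h): h.artifacts["tasks.md"] = "1. scaffold\n2. implement\n3. verify"

result = run_chain("export-to-csv", [("spec", spec_writer), ("plan", planner)])
```

The explicit log is what makes the flow auditable: the handoff rules live in code, not in anyone's chat history.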
Business & Automation Impact
In my projects, adopting SDD almost always pays off not through "beautiful documents," but by reducing rework and team conflicts. If you are building AI solutions for business, the main pain point is not code-generation speed but change management: who made the decision, where assumptions are recorded, and how the result is verified.
SpecKit and OpenSpec help exactly here: they create a contract between product, architecture, and implementation. In practice, I see three immediate effects:
- More stable reviews: we argue not about "why the assistant wrote it this way," but about "what we requested in spec.md."
- Easier onboarding: a new developer reads the constitution/spec and gets into context faster.
- Fewer regressions: when checks and acceptance criteria are explicit text, it's harder for the assistant to "cut corners."
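The third effect works best when acceptance criteria are machine-checkable. A minimal sketch, assuming the spec records criteria as markdown task-list checkboxes (the helper name and the checkbox convention are illustrative, not something either toolkit mandates):

```python
import re

def unchecked_criteria(spec_text: str) -> list[str]:
    """Return acceptance criteria still marked '- [ ]' in a spec.
    A machine-readable checklist lets CI fail a PR that cuts corners,
    instead of relying on reviewer memory."""
    return re.findall(r"^- \[ \] (.+)$", spec_text, flags=re.MULTILINE)

spec = """\
## Acceptance criteria
- [x] CSV export includes a header row
- [ ] Export of 1M rows completes under 30s
"""
print(unchecked_criteria(spec))  # ['Export of 1M rows completes under 30s']
```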
Who benefits from SpecKit/OpenSpec right now? Teams building a greenfield product in a single repository who want discipline and are willing to invest an hour or two in proper specification before implementation. Those expecting an "autopilot" will be disappointed: installing the CLI won't make an agent factory magically distribute features across services, repos, and environments.
Regarding AI automation within development, these tools are more about managed semi-automation. The human remains the dispatcher. Personally, I consider this a business plus: responsibility and control remain with the team, not the agent's "magic."
At Nahornyi AI Lab, I usually implement these toolkits not "as is," but as part of the process architecture: adding domain templates, naming rules, testing policies, migration limits, and logging/observability requirements. Without this, SpecKit/OpenSpec become just another markdown format that people stop updating after a month.
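As one concrete example of such a process layer, a tiny lint over change folders can enforce naming rules and required artifacts before merge. Everything here (the folder layout, the required files, the kebab-case rule) is a hypothetical team policy added on top of the toolkit, not something SpecKit or OpenSpec ships:

```python
import re
from pathlib import Path

REQUIRED = {"proposal.md", "tasks.md"}               # assumed team policy
NAME_RULE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")  # kebab-case change ids

def lint_change(change_dir: Path) -> list[str]:
    """Team-level guardrails layered on top of the toolkit: enforce
    naming and required artifacts so specs don't quietly rot."""
    problems = []
    if not NAME_RULE.match(change_dir.name):
        problems.append(f"{change_dir.name}: id must be kebab-case")
    missing = REQUIRED - {p.name for p in change_dir.iterdir()}
    for m in sorted(missing):
        problems.append(f"{change_dir.name}: missing {m}")
    return problems
```

Wired into CI as a required check, this is the cheapest way to keep the markdown format alive past the first month.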
Strategic Vision & Deep Dive
My main takeaway for 2026: SpecKit and OpenSpec are a solid foundation for SDD, but they do not yet solve the key problem of agentic projects—managing context and transferring responsibility between parts of the system. In "normal" development, specifications and tasks are enough. In agentic systems, you also need an operating model: agent roles, handoff protocols, memory policy, stop criteria, and security boundaries.
That is why I increasingly build hybrids: I take their artifacts as a "skeleton" (spec/constitution/tasks) and build a layer of commands and skills for the specific project on top. Essentially, I assemble an internal "command kit" for the team and the assistant: how we plan, decompose, define interfaces, handle migrations, and validate. This is the real AI solution architecture at the process level, not just the microservice level.
I must separately highlight a bottleneck that surfaces almost everywhere: multi-repo setups and integrations. Business rarely lives in an ideal monorepo. There is an ERP, one to three services, a mobile app, and an infrastructure repo. SDD toolkits focused on a single repository start to fall short here: the specification exists, but syncing changes across repos remains manual. At this point, you either introduce orchestration (scripts, CI processes, PR rules) or move to more agentic frameworks where multi-repo is a first-class citizen.
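The orchestration option can start very small: a declared mapping from spec areas to the repos they touch, so a change automatically lists every repo that needs a synchronized PR. The mapping and area names below are hypothetical; a real project would derive them from ownership metadata such as CODEOWNERS:

```python
# Hypothetical mapping from spec areas to the repos they touch.
SPEC_TO_REPOS = {
    "billing": ["erp", "backend-api"],
    "push-notifications": ["backend-api", "mobile-app"],
    "deployment": ["infra"],
}

def affected_repos(changed_areas: list[str]) -> list[str]:
    """Given the spec areas touched by a change, list every repo
    whose checkout must receive a synchronized PR."""
    repos = {r for area in changed_areas for r in SPEC_TO_REPOS.get(area, [])}
    return sorted(repos)

print(affected_repos(["billing", "push-notifications"]))
# ['backend-api', 'erp', 'mobile-app']
```

Even this trivial step moves the cross-repo sync from someone's memory into reviewable text, which is the whole spirit of SDD.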
My forecast is simple: the winner won't be the "smartest agent," but the most practical stack that can be updated every week without pain. Here, the custom set of prompts/commands discussed by practitioners often turns out to be more mature than a raw tool. Hype ends quickly; utility remains where there is specification discipline and clear execution rules.
If you want to choose an SDD approach for your product without getting buried in a raw stack, I invite you to discuss your team's context and repositories. Write to Nahornyi AI Lab—I, Vadym Nahornyi, will personally conduct the consultation and propose a process architecture tailored to your goals, risks, and deadlines.