AI coding agents · AI automation · AI-assisted development

Superpowers vs. Short Iterations: Which Is Really More Convenient?

The debate around Superpowers highlights a key question in AI development: are long, TDD-style specs better than short, human-guided iterations? For businesses, this choice impacts token costs, review speed, and the risk of creating an expensive, uncontrollable 'black box' system, making the iterative approach often more practical.

Technical Context

I got hooked on this case not because of the drama around the tools, but because of a very familiar pattern: as soon as AI automation in development becomes too verbose, the token counter skyrockets and human control diminishes. In this case, the pattern is visible almost under a microscope.

The scenario is simple. The task is small and local: switch document saving in an Elasticsearch repository over to the bulk API. The repository itself is about 500 lines, plus some surrounding code. Superpowers then spins this into a 2,700-line specification, complete with code examples, tests, clarifying questions, a full TDD ritual, and 14 commits over roughly 2 hours.
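To make the scale of the change concrete, here is a minimal sketch of what such a switch might look like, assuming a Python repository built on the official elasticsearch-py client. The repository class, index name, and document shape are hypothetical, not taken from the original post:

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk


class ArticleRepository:
    """Hypothetical repository; all names here are illustrative."""

    def __init__(self, es: Elasticsearch, index: str = "articles"):
        self.es = es
        self.index = index

    def save_all_single(self, docs: list[dict]) -> None:
        # Before: one HTTP round-trip per document.
        for doc in docs:
            self.es.index(index=self.index, id=doc["id"], document=doc)

    def save_all_bulk(self, docs: list[dict]) -> None:
        # After: one _bulk request for the whole batch.
        actions = (
            {"_op_type": "index", "_index": self.index, "_id": doc["id"], "_source": doc}
            for doc in docs
        )
        success, errors = bulk(self.es, actions, raise_on_error=False)
        if errors:
            raise RuntimeError(f"bulk indexing failed for {len(errors)} documents")
```

The point of the sketch: a change like this is a diff of a few dozen lines, which is exactly why a 2,700-line specification for it raises eyebrows.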

And this is where I would also pause. Not because TDD is bad, but because reviewing 2,700 lines for a medium-sized change is, to put it mildly, no treat. Formally, the agent did a great job; in practice, I'm now paying not only in tokens but also in my team's attention.

The alternative approach, which the user described after switching to Codex with Matt Pocock's skills, has a different rhythm: a short plan, a short iteration, a review of the resulting code, and a discussion of unclear parts with the agent. I personally find this mode more sustainable when you need to keep the architecture in your own hands rather than accept yet another neatly packaged black box.

Yes, from the outside it looks slower than throwing in a large spec and going for coffee. But in practice, a short context is almost always cheaper and more predictable, and it fits better into AI integration within a live project, where the code has already accumulated history, compromises, and rough edges.

A separate important point: there are no direct benchmarks here, and I won't pretend this is laboratory-grade truth. For now, these are mainly one user's firsthand observations, but they align well with what I see in real agent pipelines.

What This Means for Business and Automation

The winners here are teams that need managed AI solution development rather than 'autopilot at any cost': less context, faster reviews, lower cost per cycle. This is especially true where frequent, safe edits matter more than a demonstrably autonomous agent.

The losers are scenarios where an agent is given too much freedom on small tasks: the expensive thoroughness eats up the benefits, and a human still has to verify the result.

I would put it this way: a verbose TDD approach is good when the task is genuinely large and needs to be formalized almost like a mini-project. For everyday product development, compact iterations are often simply more cost-effective.

At Nahornyi AI Lab, we analyze these very bottlenecks in teams: where an agent is needed, where a good cycle with a short context is sufficient, and where AI architecture has started burning the budget for no reason. If you have a similar story with expensive and unwieldy agents, let's look at your process together and build an AI automation setup that fits your actual workflow, not just a fancy demo.

A related part of this discussion, particularly concerning the practical implementation of advanced concepts, is our analysis of how AI architecture separates effective solutions from mere demonstrations. Understanding the underlying structure is crucial for achieving tangible results, rather than relying on superficial promises.
