Technical Context
I would immediately temper the euphoria here. The headline figure in the news is 86.1 million weekly downloads for Codex versus 7.2 million for Claude Code, according to TickerTrends. The primary takeaway, though, isn't that "Codex has won forever" but that OpenAI clearly struck a chord with developers after its recent update.
I tend to view such spikes not as a popularity contest but as a signal about how teams actually adopt AI. If a tool suddenly captures this much attention, it means many people have discovered a shorter path from a prompt to working code, and that directly affects real AI automation in development.
However, there's a crucial nuance with the numbers. I don't see independent confirmation of the 86 million figure in the available context, and other public metrics for Codex suggest millions of weekly users and developers, not tens of millions of downloads per week. So I'd treat these values as a market indicator rather than an accounting truth.
What might explain this technically? Judging by the April updates to Codex, OpenAI expanded its use cases: more agency, a more convenient environment, broader integrations, and a tighter loop between task, context, and result. For products like this, that's critical: what sells the tool isn't the model itself but how little friction there is at every step.
And this is where Claude Code faced an unfavorable comparison. Even if many rate the quality of Anthropic's model highly, the dev tools market often votes not with benchmarks but with what's quicker to install, what behaves more predictably in the IDE, and what integrates more easily into a team's existing AI architecture.
What This Changes for Business and Automation
I see three practical conclusions. First, when choosing a coding assistant, you can no longer just look at "who writes functions smarter." You need to test the entire loop: onboarding speed, integration stability, context control, and the cost of an error; a minimal sketch of such an evaluation harness follows this paragraph.
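To make that concrete, here is a minimal sketch of what "testing the entire loop" can look like in practice, under my own assumptions: the `noop_assistant` adapter and the task names are hypothetical placeholders to be wired to whatever tool your team is trialing, and `test_cmd` should be your project's real test command.

```python
# A minimal sketch of a "whole loop" evaluation harness for coding assistants.
# The assistant adapter and tasks below are hypothetical; replace them with a
# real call into the tool under evaluation (Codex, Claude Code, etc.).
import subprocess
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class TrialResult:
    task: str
    seconds: float
    tests_passed: bool

def run_trial(task: str, assistant: Callable[[str], None], test_cmd: list[str]) -> TrialResult:
    """Time one task end to end: prompt -> code change -> project test suite."""
    start = time.monotonic()
    assistant(task)                  # the assistant edits the working tree
    proc = subprocess.run(test_cmd)  # "cost of an error" shows up as a failing suite
    return TrialResult(task, time.monotonic() - start, proc.returncode == 0)

if __name__ == "__main__":
    # Hypothetical adapter: swap in a real integration for your tool of choice.
    def noop_assistant(prompt: str) -> None:
        print(f"[assistant] would work on: {prompt}")

    tasks = ["add a CRUD endpoint", "refactor the billing module", "write unit tests"]
    for result in (run_trial(t, noop_assistant, ["pytest", "-q"]) for t in tasks):
        print(f"{result.task}: {result.seconds:.1f}s, tests_passed={result.tests_passed}")
```

The point of the harness isn't precise benchmarking; it forces you to measure the same loop (prompt, change, tests) for every candidate instead of comparing polished demos.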
The second point is about money. If Codex genuinely reduces friction, teams can reach production scenarios faster: generating CRUD, refactoring, writing tests, creating internal documentation, and building simple agents for developer support (one such scenario is sketched below). This is no longer a toy but a foundation for AI automation within engineering processes.
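As an illustration of one such scenario, here is a hedged sketch of auto-drafting unit tests with the OpenAI Python SDK. The model name, prompts, and sample function are my assumptions, not a prescription; the generated tests are a draft for code review, not something to merge blindly.

```python
# A minimal sketch of one production scenario: drafting unit tests for a given
# function via the OpenAI Python SDK. Model choice and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOURCE = '''
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any code-capable model works here
    messages=[
        {"role": "system", "content": "You write concise pytest unit tests."},
        {"role": "user", "content": f"Write pytest tests for this function:\n{SOURCE}"},
    ],
)
# The draft goes into the review queue, not straight to main.
print(response.choices[0].message.content)
```

Even a pipeline this small only pays off when a human stays in the loop: the model drafts, the team reviews, and the test suite remains the arbiter.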
Third, those who choose a tool based on hype without considering the architectural consequences will lose out. At Nahornyi AI Lab, we solve precisely these kinds of problems for clients: we don't just plug in a trendy AI tool; we build a working system where the model, access rights, IDE, repositories, and quality control don't conflict with each other.
If your team is already drowning in the routine of code reviews, boilerplate, and internal tech support, we can systematically analyze your process and build a custom AI development plan for it. At Nahornyi AI Lab, I usually start not with the model but with the bottleneck, because that's where AI automation delivers real impact rather than just a flashy demo.