Technical Context
I appreciate the small details that actually save hours rather than just looking like another flashy demo feature. The Codex app has exactly that: I open a diff, select a specific piece of code, hit Comment, and it goes into the chat as a precise input for the next iteration.
For AI-assisted development, this is a very healthy mechanic. I'm not writing an abstract "make it better somehow," but instead tying my feedback to the exact place where the model made a mistake, over-engineered something, or violated the project's style guide.
Essentially, the interface brings the review of AI-generated changes closer to a normal human code review. But instead of a long back-and-forth across files, I immediately close the loop: saw a problem, marked it, sent it, and got a new fix.
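To make the mechanic concrete, here is a minimal sketch of how an inline diff comment could be packaged into a precise, location-anchored instruction for the next iteration. This is an illustration only: the class and field names (`InlineComment`, `file`, `selection`, and so on) are hypothetical and do not reflect the Codex app's actual internal schema.

```python
from dataclasses import dataclass

# Hypothetical payload for an inline diff comment.
# Field names are illustrative, not the Codex app's real schema.
@dataclass
class InlineComment:
    file: str        # path of the file in the diff
    start_line: int  # first line the comment is anchored to
    end_line: int    # last line the comment is anchored to
    selection: str   # the exact code the reviewer selected
    note: str        # the reviewer's feedback for the next iteration

    def to_prompt(self) -> str:
        """Render the comment as a location-anchored instruction."""
        return (
            f"In {self.file}, lines {self.start_line}-{self.end_line}:\n"
            f"{self.selection}\n"
            f"Reviewer note: {self.note}"
        )

comment = InlineComment(
    file="src/auth.py",
    start_line=42,
    end_line=42,
    selection="retry_count = 10",
    note="Too many retries; cap at 3 per the project style guide.",
)
print(comment.to_prompt())
```

The point of the structure is that the model receives the file, the line range, and the selected code together with the note, so its next pass can't drift to the wrong place.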
This works especially well where an agent is already generating a plan or a batch of changes, and I want to manage local decisions rather than the entire task at once. It means less noise in the chat and a lower chance that the model will lose context and start fixing the wrong thing.
The second part of this story, which also resonates with me, is Zed. Many developers perceive it as a "frictionless editor": fast scrolling, instant response, and a smooth UI. The reason is clear: it's written in Rust and uses its own GPU-accelerated UI rendering.
The point here isn't about joining an editor fan club, but about everyday mechanics. When an IDE doesn't lag during navigation, search, or context switching, the agentic workflow feels lighter. You get less frustrated, approve steps faster, and maintain momentum more easily.
Impact on Business and Automation
I wouldn't overstate it: neither Codex comments nor Zed alone will make a team productive. But they do remove the minor friction that compounds into lost speed.
Who benefits? Teams where AI integration has already progressed to actual code review, not just "playing around in a chat." For them, inline diff comments reduce the number of unnecessary runs, and a fast editor makes the cycle itself shorter.
Who doesn't benefit? Those who lack proper review guidelines, task structures, and boundaries for the agent. I see this regularly: without a clear AI architecture, even a user-friendly interface just masks the chaos.
If your AI-assisted code reviews are already getting scattered across chats, tabs, and lost edits, this is the moment to rethink the process. At Nahornyi AI Lab, we turn these concepts into a working system: from AI automation to a proper review loop, so the agent eliminates work rather than creating more of it.