Technical Context
I find myself more frequently forgoing a separate plan mode. With the GPT-5.5 and Codex combination, the model genuinely maintains the task's rhythm better: it first outlines the work, then moves to implementation without needing constant prodding from me. For practical AI implementation, this is a very welcome shift.
But I'll pump the brakes right there: OpenAI's official materials don't explicitly promise that the model can reliably replace a dedicated planner. What the documentation does say is narrower: GPT-5.5 is stronger in agentic workflows, supports a reasoning.effort setting, works via the Responses API, and uses tools more effectively. That is not the same as a guaranteed automatic switch between 'plan' and 'act' phases.
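To make those knobs concrete, here is a minimal sketch of assembling a Responses API request with an explicit reasoning-effort level. The model name is a placeholder for whatever Codex-class model your account exposes, and the helper function is my own illustration, not part of the SDK:

```python
# Sketch: build kwargs for an OpenAI Responses API call with an explicit
# reasoning-effort setting. Only the payload is constructed here; the actual
# network call is shown commented out because it needs an API key.

def build_responses_request(task: str, effort: str = "medium") -> dict:
    """Return kwargs suitable for client.responses.create(**kwargs)."""
    # Effort levels as documented at the time of writing; check current docs.
    if effort not in {"minimal", "low", "medium", "high"}:
        raise ValueError(f"unsupported reasoning effort: {effort}")
    return {
        "model": "gpt-5.5-codex",         # placeholder model name
        "input": task,
        "reasoning": {"effort": effort},  # Responses API reasoning config
    }

# Usage with the real client (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# resp = client.responses.create(**build_responses_request("Refactor auth", "high"))

req = build_responses_request("Refactor auth", "high")
print(req["reasoning"])  # {'effort': 'high'}
```

Keeping request construction in one small function like this also makes the effort level easy to vary per task type instead of hard-coding it.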
I've dug into the available guides, and the picture is this: GPT-5.5-Codex has indeed become more confident in long engineering tasks, getting stuck less often in "here's my plan" and more frequently proceeding to action. However, if you need a predictable production pipeline, the surrounding framework still matters: modes, tool policies, limits on idle responses, and sometimes an explicit strict-agentic layer.
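What that surrounding framework can look like is easy to sketch. Below is a toy strict-agentic loop in pure Python with a stubbed model: it tolerates a bounded number of plan-only replies, then forces an act step. The two-phase split, the PLAN/ACT/DONE prefixes, and all function names are my illustration, not an API from any real framework:

```python
from typing import Callable, List

def run_agent(model: Callable[[str], str], task: str,
              max_plan_turns: int = 2, max_total_turns: int = 10) -> List[str]:
    """Toy strict-agentic loop: allow a limited number of planning-only
    replies, then explicitly demand action. Returns the transcript."""
    transcript: List[str] = []
    plan_turns = 0
    prompt = task
    for _ in range(max_total_turns):
        reply = model(prompt)
        transcript.append(reply)
        if reply.startswith("PLAN:"):
            plan_turns += 1
            if plan_turns >= max_plan_turns:
                # Idle-response limit reached: force the act phase.
                prompt = f"{task}\nStop planning. Execute the next step now."
            continue
        if reply.startswith("DONE"):
            break
        prompt = task  # an action happened; continue normally
    return transcript

# Stub model: plans twice, then acts, then finishes.
_replies = iter(["PLAN: outline steps", "PLAN: refine steps",
                 "ACT: edited file", "DONE"])
log = run_agent(lambda p: next(_replies), "fix the failing test")
print(log[-1])  # DONE
```

The point of the sketch is the shape, not the details: the limit on consecutive plan turns is exactly the kind of "limit on idle responses" that a production pipeline still supplies around the model.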
So, I completely understand the user sentiment that "a planner is no longer needed." From a UX perspective, that's true: the friction is lower. But from an AI architecture standpoint, I'd phrase it more carefully: it's not the abolition of the planner approach, but an increase in the model's baseline ability to manage its own work cycle.
Impact on Business and Automation
The first consequence is simple: less manual micromanagement in development. Where you previously had to coax a plan out of an agent separately and then push it into execution, part of that mechanic can now be removed, which speeds up AI integration into products.
Second, it's not the inference itself that becomes cheaper, but the orchestration around it. Fewer service steps, fewer redundant messages, and simpler AI automation scenarios where the task doesn't require a strict audit of every step.
But those who discard control too early will lose out. In sensitive processes where verifiability, approvals, and action tracing are important, a separate planning phase is still valuable.
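For those sensitive processes, the planning phase can survive as an explicit approval gate with an audit trail. A minimal sketch, where the gate logic and all names are illustrative rather than taken from any particular product:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AuditedPlanGate:
    """Hold proposed steps until a reviewer or policy approves them,
    logging every decision for later tracing."""
    approve: Callable[[str], bool]       # human-or-policy approval hook
    audit_log: List[str] = field(default_factory=list)

    def execute(self, steps: List[str]) -> List[str]:
        done = []
        for step in steps:
            if self.approve(step):
                self.audit_log.append(f"APPROVED: {step}")
                done.append(step)        # real execution would go here
            else:
                self.audit_log.append(f"REJECTED: {step}")
        return done

# Example policy hook: refuse anything that touches production data.
gate = AuditedPlanGate(approve=lambda s: "prod" not in s)
ran = gate.execute(["update staging config", "drop prod table"])
print(ran)  # ['update staging config']
```

The approval hook is where a human sign-off or a compliance rule plugs in; the audit log is what makes every agent action reviewable after the fact.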
I'd look at it this way: for internal dev tasks and quick agents, you can confidently simplify the flow. For production systems where errors carry real risk, it's better to test on your own scenarios rather than believing in magic. If your team's speed is bottlenecked, or your agent either plans endlessly or stalls, we can analyze your process together: at Nahornyi AI Lab, I typically tune these setups into coherent AI automation without the unnecessary theatrics around a "smart agent."