
Codex 0.128.0 Pushes Towards Autonomous Operation

Codex CLI 0.128.0 now lets you enable the experimental /goal command and other hidden multi-agent features. This matters for businesses because AI automation is evolving from simple chat into a task performer with memory, state, and a more autonomous loop, capable of handling complex, long-running tasks.

Technical Context

I wouldn't call this a 'minor update'. Codex CLI 0.128.0 ships a hidden experimental feature, /goal, which immediately sparked my engineering curiosity: this is no longer just a dialogue with a model, but the beginning of proper AI automation within a code agent.

In practice, you need to update to 0.128.0 and enable the feature separately via codex features enable goals. In some builds, it can also be enabled through config.toml by setting the flag features.goals = true. If the feature doesn't appear, the problem is usually not on your end: the flag is simply still gated behind the experimental layer.
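For the config.toml route, the dotted-key form quoted above would look like this (a sketch based on community reports; the exact key name may differ between builds):

```toml
# Experimental feature gate for /goal; reported key, may change
features.goals = true
```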

Once activated, the /goal command appears. I see it as a 'long-lived task': not a one-off prompt, but a goal that Codex pursues over several turns until it's completed, hits a limit, or you pause it.
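To make the 'long-lived task' idea concrete, here is a minimal Python sketch of that control flow. None of these names come from the Codex codebase; this only illustrates the loop the paragraph describes: keep pursuing a goal across turns until it completes, hits a turn limit, or is paused.

```python
# Hypothetical goal loop -- illustrative only, not the Codex API.
from dataclasses import dataclass


@dataclass
class Goal:
    description: str
    max_turns: int = 10
    turns_used: int = 0
    paused: bool = False
    done: bool = False


def run_goal(goal: Goal, step) -> str:
    """Drive the goal one turn at a time until a stop condition fires."""
    while True:
        if goal.done:
            return "completed"
        if goal.paused:
            return "paused"
        if goal.turns_used >= goal.max_turns:
            return "limit reached"
        goal.done = step(goal)  # one agent turn; returns True when finished
        goal.turns_used += 1


# Example: a task whose step function "finishes" on its third turn.
goal = Goal("migrate configs", max_turns=5)
result = run_goal(goal, lambda g: g.turns_used == 2)
print(result, goal.turns_used)
```

The point of the sketch is the persistence: the goal object survives between turns instead of being reset by each new message.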

This closely resembles the Ralph loop approach many have discussed: the agent maintains an intention, continues its work, and doesn't fall apart after each new message. For AI integration into dev processes, this is far more important than another cosmetic command.

Based on discussions, there's another batch of experimental features nearby: artifact, chronicle, code_mode, memories, multi_agent_v2, plugin_hooks, remote_control, runtime_metrics, unified_exec, and others. I haven't seen a full official list in proper documentation yet, but the direction is already clear.

Three things caught my attention the most: goals, memories, and multi_agent_v2. If they integrate well, Codex will start managing long engineering tasks not as a single tired assistant, but as a system with state, subtasks, and role distribution.
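As a thought experiment, the combination of goals, memories, and roles could be modeled with a data structure like this. The structures below are my own sketch of 'state, subtasks, and role distribution', not Codex internals.

```python
# Illustrative data model only -- not how Codex actually stores state.
from dataclasses import dataclass, field


@dataclass
class Subtask:
    title: str
    assigned_role: str  # e.g. "planner", "coder", "reviewer"
    status: str = "pending"


@dataclass
class GoalState:
    objective: str
    memory: list = field(default_factory=list)      # notes persisted across turns
    subtasks: list = field(default_factory=list)    # work split across roles

    def progress(self) -> float:
        done = sum(1 for s in self.subtasks if s.status == "done")
        return done / len(self.subtasks) if self.subtasks else 0.0


state = GoalState(
    "upgrade the build pipeline",
    subtasks=[
        Subtask("plan steps", "planner", "done"),
        Subtask("edit CI config", "coder"),
        Subtask("review diff", "reviewer"),
    ],
)
state.memory.append("CI uses GitHub Actions")
print(f"{state.progress():.2f}")
```

Even this toy version shows why the combination matters: progress, memory, and role assignment live in one place instead of being re-derived from a chat transcript.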

Impact on Business and Automation

For teams, this means one simple thing: some tasks can be delegated not for a 'single answer' but for 'completion to a result'. Migrations, refactoring, tackling technical debt, preparing PRs, processing files, and checking configs—all become closer to autonomous execution.

Those who already think in terms of AI architecture, rather than 'give us another chatbot', will win. Those who implement the feature without controlling budgets, access rights, and logging will lose, because an agent with a goal and an agent without limits are very different beasts.
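What 'controlling budgets, access rights, and logging' can look like in code is sketched below. Everything here is a hypothetical wrapper of my own design, not a real Codex interface: a spend cap, an allow-list of actions, and an audit trail around an otherwise autonomous agent.

```python
# Illustrative guardrail wrapper -- all names are hypothetical.
class GuardedAgent:
    def __init__(self, budget_usd: float, allowed_actions: set):
        self.budget_usd = budget_usd
        self.spent_usd = 0.0
        self.allowed_actions = allowed_actions
        self.audit_trail = []  # every decision is logged, allowed or not

    def act(self, action: str, cost_usd: float) -> bool:
        if action not in self.allowed_actions:
            self.audit_trail.append(f"DENIED {action}: not allow-listed")
            return False
        if self.spent_usd + cost_usd > self.budget_usd:
            self.audit_trail.append(f"DENIED {action}: budget exhausted")
            return False
        self.spent_usd += cost_usd
        self.audit_trail.append(f"OK {action} (${cost_usd:.2f})")
        return True


agent = GuardedAgent(budget_usd=1.00, allowed_actions={"read_file", "open_pr"})
assert agent.act("read_file", 0.10)
assert not agent.act("delete_prod_db", 0.01)  # blocked: not allow-listed
assert not agent.act("open_pr", 5.00)         # blocked: over budget
```

The design choice worth copying is that denials are logged just like successes: an agent with a goal needs an auditable trail, not just a kill switch.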

At Nahornyi AI Lab, we solve these exact problems for clients: we don't just flip an experimental switch, but build a secure environment where automation with AI truly saves hours instead of creating new risks. If you've been wanting to automate code reviews, migrations, or internal dev workflows, we can analyze it together and build an AI solution tailored to your process, not just a fancy demo.

We also previously analyzed how the absence of a well-thought-out AI architecture can turn demo projects into myths, using the Codex 5.2 case as an example. This highlights the importance of a deep understanding of the platform when exploring its new features.
