
How to Sneak GPT-5.5 Pro into Codex

Users have found an unofficial way to use GPT-5.5 Pro within Codex. The agent communicates with ChatGPT through its built-in browser, passing files in and retrieving results. For AI automation, this is a useful temporary bridge to stronger reasoning in setups where direct access to the Pro model is not yet available.

What's the Discovery?

I love findings like this: they don't come from documentation but from tinkering with the tool. The idea is simple: GPT-5.5 Pro isn't officially available as a subagent model in Codex, but you can use it as an external brain through the agent's built-in browser.

The setup is frankly hacky, but it works. Codex sees the thread in the browser, can send messages, upload files, and then retrieve the response. Essentially, you get a makeshift AI integration between Codex and ChatGPT, where Pro acts as a subagent without any native "enable Pro" button.

And here's an important clarification. This isn't a standard OpenAI feature but a user-discovered lifehack. Official materials on subagents mention the standard GPT-5.5 model lineup, while Pro remains an API-only option with more compute, not integrated into Codex as a separate mode.

So technically, it's not a "Codex subagent" in its purest form but an external loop: Codex operates within its container and tools, and for a complex piece of reasoning, it sends the task to ChatGPT via the browser. If you've ever built an AI architecture from several incompatible services, the picture is familiar.

Where This Is Actually Useful

I would use this trick where Codex is already good at handling files and editing code but starts to struggle with heavy analysis. For example: parsing a large thread, assessing an architecture, weighing a contentious refactor, or any long reasoning task.

The winners are those who need to quickly enhance their agent without building a full-scale AI solution around an API proxy. The losers are those who need reliability: the workflow is fragile, unofficial, and could break after any UI update or browser restriction.

I definitely wouldn't build a critical production system on this. But as a temporary bridge to test a hypothesis, build AI automation, and see if Pro gives a noticeable boost on your tasks, it's a solid idea.

If you are in a similar situation and your team is already hitting the limits of the standard pipeline, I'd look at the process end to end. At Nahornyi AI Lab, we build these connections carefully: determining where native Codex capabilities suffice, where an external reasoning loop is appropriate, and where it's time to build AI automation without such workarounds, with proper business-grade stability.

We previously covered the architectural challenges of integrating AI, specifically analyzing the 'Codex 5.2' case and distinguishing practical AI architecture from mere demos. That context matters when deciding how to build and operate subagents like the one discussed here.
