Technical Context
This case caught my attention not as a fancy demo, but as a hint of what real AI automation in design could look like. The scenario is interesting: Codex doesn't just write code; it goes through almost the entire prep for a UX/UI task, from analyzing a flow to generating screens.
Here's how it works. It opens a link to a flow in an embedded browser, extracts screenshots on its own, and tries to understand the onboarding structure. Then, it goes to the web to gather context on monobank: positioning, onboarding, verification, brand elements, and the overall product presentation.
Then comes the most interesting and most flawed part. Codex generates wireframes via image generation, checks them itself, makes corrections, and then proceeds to specific screens. For each screen, it pulls in patterns, platform guidelines, card limitations, and iOS specs, and then prepares specs and assets based on this information.
And here, I wouldn't overestimate the magic. In essence, this isn't "AI drew the perfect interface," but an agentic pipeline where the model glues together research, visual references, generation, and self-checking into a single process.
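To make the "glue" concrete, here is a minimal sketch of what such a pipeline loop looks like in code. Everything here is an assumption for illustration: the function names (`extract_flow`, `gather_context`, `generate_wireframe`, `self_check`) and data shapes are hypothetical stand-ins, not Codex's actual API; the real steps would call a browser, a search tool, and an image model.

```python
from dataclasses import dataclass

# Hypothetical sketch of the agentic pipeline described above.
# All function bodies are stubs standing in for real tool calls.

@dataclass
class Screen:
    name: str
    wireframe: str
    approved: bool = False

def extract_flow(url: str) -> list[str]:
    """Stand-in for the embedded-browser step: capture the screens
    that make up the onboarding flow at `url`."""
    return ["welcome", "phone_verification", "card_setup"]

def gather_context(product: str) -> dict:
    """Stand-in for web research: positioning, onboarding patterns,
    brand elements, platform guidelines."""
    return {"product": product, "guidelines": ["iOS specs", "card limits"]}

def generate_wireframe(name: str, context: dict) -> str:
    """Stand-in for image generation of a rough wireframe."""
    return f"wireframe:{name}:{context['product']}"

def self_check(wireframe: str) -> bool:
    """Stand-in for the model reviewing its own output."""
    return wireframe.startswith("wireframe:")

def run_pipeline(url: str, product: str, max_retries: int = 2) -> list[Screen]:
    """Research once, then generate -> check -> correct per screen."""
    context = gather_context(product)
    screens = []
    for name in extract_flow(url):
        for _ in range(max_retries + 1):
            wf = generate_wireframe(name, context)
            if self_check(wf):  # accept only wireframes that pass the check
                screens.append(Screen(name, wf, approved=True))
                break
    return screens

result = run_pipeline("https://example.com/flow", "monobank")
```

The point of the sketch is the shape, not the stubs: research happens once up front, while generation and self-checking run in a retry loop per screen, which is exactly where the consistency problems described below tend to surface.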
The limitations are also very down-to-earth. Image generation is finicky with things it wasn't well-trained on: liquid glass, complex materials, precise dimensions, and consistent spacing. Consistency suffers too: one day a screen looks great, the next day the one beside it looks like it's from a different universe.
What This Changes for Business and Automation
I see this not as a replacement for a strong product designer, but as an accelerator for teams where the bottleneck is in research and rough screen production. At the AI implementation stage, you can quickly run through 3-5 onboarding directions without weeks of manual reference gathering and initial wireframing.
Product teams that need speed benefit the most: banks, fintech, SaaS, mobile products. Those who expect pixel-perfect results out of the box and don't account for human oversight in AI integration lose out.
To be very practical: I would pair these tools with a designer and a PM, not use them as a replacement. At Nahornyi AI Lab, we build such AI solutions for business, where an agent handles research, structure, and rough generation, allowing the team to spend its time on decisions rather than routine tasks.
If your UX team is drowning in repetitive flows, research, and endless first drafts, this is something that can be addressed concretely. I would look at what Vadym Nahornyi and Nahornyi AI Lab are doing not as a toy, but as a way to build AI automation around your design process so that people move pixels less and move the product forward more.