
AI Writes UI: Electron Is Fine, SwiftUI Is Still Lagging

Currently, AI is significantly better at generating UIs for Electron and the web stack than for native SwiftUI. For businesses, this directly impacts AI implementation: a quick prototype is now feasible, but a production-ready native interface still requires substantial engineering refinement. This shapes the trade-off between speed-to-market and UX quality.

Technical Context

I've been following the recent discussion around Tolaria and, honestly, none of it surprises me: current models are genuinely more comfortable with Electron than with native SwiftUI. For AI automation, this is a practical reality, not some theoretical debate from a group chat.

I see this in my own tests as well: React, HTML, CSS, and the entire web ecosystem are much simpler for these models. The structure is predictable, training data is abundant, and there are fewer platform-specific quirks that can break the result at the worst possible moment.

With Electron, an AI model can usually assemble an interface that at least launches, looks cohesive, and doesn't fall apart after the first edit. With SwiftUI, it's a different story: it might generate a basic screen, but as soon as state management, navigation, macOS system patterns, or even just precise element behavior come into play, it's time for manual surgery.

The comment about a "clunky website instead of an app" particularly resonated with me. It's spot-on. AI-generated Electron apps often have little giveaways that expose their web roots: text that selects when it shouldn't, scrolling that doesn't match native physics, a generic look that ignores platform conventions, and keyboard shortcuts that are missing or only half-working.
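Some of these giveaways can at least be patched at the stylesheet level. As a rough sketch (the selectors are illustrative, not from any real app), an Electron renderer often needs rules like these to feel less like a web page:

```css
/* Illustrative fixes for common "web roots" giveaways in an Electron app.
   Class names are hypothetical; adapt them to your own markup. */

/* Chrome-like UI (toolbars, sidebars) should not be text-selectable. */
.toolbar, .sidebar {
  user-select: none;
  -webkit-user-select: none;
  cursor: default; /* avoid the I-beam cursor over non-text UI */
}

/* Suppress rubber-band overscroll, which feels wrong on desktop. */
html, body {
  overscroll-behavior: none;
}

/* A custom title bar should drag the window, like a native one. */
.titlebar {
  -webkit-app-region: drag;    /* Electron-specific: region drags the window */
}
.titlebar button {
  -webkit-app-region: no-drag; /* keep controls clickable inside the drag region */
}
```

None of this replaces real native behavior, but it removes the most obvious tells, which is exactly the kind of polish generated code tends to skip.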

But here's a key distinction: this doesn't mean Electron is bad. It means today's models understand declarative web UIs better than native frameworks like SwiftUI, which play by the stricter rules of the Apple ecosystem. On SwiftUI, the cost of an error is higher, and a great prompt can't fix everything.

Impact on Business and Automation

If I need a quick internal tool, an admin panel, or a desktop wrapper for an AI agent, I'm more likely to choose Electron right now. It's faster, cheaper, and perfect for testing a hypothesis without weeks of polishing.

If the task hinges on UX quality, low memory consumption, and a genuine macOS product feel, I wouldn't delude myself into thinking a model can generate it in one go. This requires a proper AI architecture: deciding what to generate automatically, what to leave for humans, and where to implement quality control.

The winners are teams that need speed-to-market. The losers are those who promise clients a "premium native UX from a single prompt."

These are exactly the kinds of decisions I help clients navigate: determining where an AI solution on a web stack is sufficient and where it's better not to cut corners on the native part. If your product's success depends on its interface, let's look at the scenarios together. At Nahornyi AI Lab, I can help you build an AI implementation that won't burn your budget on beautiful but brittle generative magic.

The discussion of how AI models face difficulties generating UI in languages like Swift is part of a broader conversation about AI-generated code quality. We previously covered how using AI in development can lead to lower code quality and a higher total cost of ownership.
