
HTML Instead of Markdown for Agent Outputs

An idea is gaining traction around Claude Code: complex agent outputs are better in HTML than Markdown. This is crucial for AI automation because the bottleneck is often not token cost, but the speed at which humans can understand diffs, architecture diagrams, or verification results. HTML offers a richer, more interactive interface.

Technical Context

What hooked me here isn't another formatting debate but the direct impact on practical AI implementation: an agent can write whatever it wants, but if I can't digest it quickly, it isn't very useful. Tariq from the Claude Code team put it bluntly: nobody reads long markdown reports, ASCII diagrams fall apart, and character-based tables strain the eyes.

And I tend to agree. When I run an agent on a complex AI architecture, I don’t need a mile-long text scroll. I need an artifact that helps me make a decision: where are the risks, what needs to change, and where to look next.

Karpathy supported the idea from another angle: our visual channel is too powerful to keep cramming everything into linear text. This isn't about 'HTML is cooler than Markdown.' It's about agent outputs increasingly resembling a small interface rather than just a note.

The most practical part of this isn't in the tweets, but in the open-source visual-explainer skill by Nico Bailon for Claude Code. It adds commands like /diff-review, /plan-review, /project-recap, /fact-check, and /generate-web-diagram. The output isn't another .md file, but a self-contained HTML document that opens directly in the browser. There's also a --slides flag to turn the result into a slide deck.

I like that this is a working pattern, not just a theoretical philosophy. HTML provides collapsible sections, color hierarchy, proper diagrams, navigation, and screen composition. Yes, it will consume more tokens than Markdown. But in real-world reviews, I more often hit the limits of my own cognitive bandwidth than the cost of the output.
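To make the pattern concrete, here is a minimal sketch of what an agent emitting HTML instead of Markdown might do: render findings as native `<details>` blocks so long sections collapse instead of scrolling. This is my own illustration, not code from the visual-explainer skill; the `render_report` helper and its section format are hypothetical.

```python
from html import escape

def render_report(title, sections):
    """Render a self-contained HTML report (hypothetical helper).

    Each (heading, body, open_by_default) section becomes a native
    <details> block, so long findings collapse instead of scrolling.
    """
    parts = [
        "<!doctype html><html><head><meta charset='utf-8'>",
        f"<title>{escape(title)}</title>",
        "<style>details{border:1px solid #ccc;margin:8px 0;padding:4px 8px}"
        "summary{cursor:pointer;font-weight:bold}</style>",
        "</head><body>",
        f"<h1>{escape(title)}</h1>",
    ]
    for heading, body, open_by_default in sections:
        attr = " open" if open_by_default else ""
        parts.append(
            f"<details{attr}><summary>{escape(heading)}</summary>"
            f"<p>{escape(body)}</p></details>"
        )
    parts.append("</body></html>")
    return "\n".join(parts)

report = render_report(
    "Architecture review",
    [
        ("Risks", "Two services share a write path without locking.", True),
        ("Suggested changes", "Queue writes between ingest and index.", False),
    ],
)
# To inspect it, write the string to a file and open it in a browser:
# pathlib.Path("report.html").write_text(report, encoding="utf-8")
```

The point is the shape, not the styling: the risky section opens by default, everything else stays collapsed, and the whole artifact is one file you can open directly in a browser, which is exactly the decision-making interface the skill is going for.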

Impact on Business and Automation

For business, the conclusion is simple: if an agent writes reports that humans read, the format suddenly becomes part of the ROI. A good HTML output speeds up code reviews, architectural discussions, and fact-checking more effectively than another paragraph of text.

The winners are teams that make decisions from complex artifacts: product, engineering, consulting, and auditing. Purely text-based workflows lose when they are already cracking under information overload and people scroll instead of understanding.

But I wouldn't turn this into a religion. Markdown is still better where you need a lightweight, editable, git-friendly output. HTML should be enabled when the agent is creating a decision-making interface, not just a note.

In fact, at Nahornyi AI Lab, we design exactly these branching points for clients: where to stick with text, where to build a visual layer, and how to implement artificial intelligence integration so the team actually works faster instead of just admiring a demo. If your agent is already writing something no one wants to read, let's analyze the workflow and build an AI automation solution tailored to your real workload, not a pretty presentation.

Beyond just the format of its output, Claude Code is actively used in practical development workflows. We previously covered how parallel Claude Code agents can effectively catch race conditions during pull request reviews, showcasing its utility in ensuring code quality.
