Technical Context: What Chrome EPP Actually Delivered
I took a close look at the WebMCP early preview in Chrome (EPP) and identified the core concept: a website is no longer just a "page for reading" but a set of native tools that an agent calls as functions. Not through DOM-guessing or OCR, but via a browser API with validated arguments.
The protocol offers two paths. The declarative option attaches to <form> elements via toolname/tooldescription attributes, letting the browser build the JSON schema from fields. The imperative path involves registering a tool from JS: navigator.modelContext.registerTool() with a name, description, input schema (JSON Schema / Zod), and an async execute handler.
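A minimal sketch of the imperative path, based on the shape described above (name, description, JSON Schema input, async execute). The exact EPP property names may differ from this sketch, and the /api/cart endpoint is a hypothetical same-origin route used purely for illustration:

```javascript
// Hypothetical tool definition in the shape the EPP describes:
// name + description + JSON Schema inputs + async execute handler.
const addToCartTool = {
  name: "addToCart",
  description: "Add a product to the current user's shopping cart.",
  inputSchema: {
    type: "object",
    properties: {
      productId: { type: "string", description: "SKU or product identifier" },
      quantity: { type: "integer", minimum: 1, default: 1 },
    },
    required: ["productId"],
  },
  // The handler runs inside the page, so fetch() carries the user's own
  // session cookies -- no separate agent credentials or token hand-off.
  async execute({ productId, quantity = 1 }) {
    const res = await fetch("/api/cart", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ productId, quantity }),
    });
    if (!res.ok) throw new Error(`addToCart failed: ${res.status}`);
    return res.json();
  },
};

// Feature-detect: the API only exists in browsers with the EPP enabled.
if (typeof navigator !== "undefined" && navigator.modelContext?.registerTool) {
  navigator.modelContext.registerTool(addToCartTool);
}
```

The feature-detection guard matters in practice: outside the early preview the tool definition is simply inert, so the same bundle can ship to all users.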
What strikes me most isn't the syntax, but the execution: the call happens within the user session context. This means the agent operates with the same cookies and authorization, without re-login or passing tokens to third-party integrations.
The protocol also provides gates for risky actions: a tool can require explicit user confirmation through the browser's interaction mechanisms. This is critical because while addToCart() is trivial, cancelOrder() or changeAddress() falls squarely into compliance and liability territory.
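One way to structure such a gate is to wrap the risky handler so it cannot run without an explicit user decision. WebMCP's real confirmation surface may look different; in this sketch, window.confirm stands in for whatever prompt the browser exposes, and cancelOrder is an illustrative tool, not a real API:

```javascript
// Hypothetical gating helper: wraps an execute handler so the action
// only proceeds after explicit user confirmation.
function gated(message, execute) {
  return async (args) => {
    const ok =
      typeof window !== "undefined"
        ? window.confirm(message) // stand-in for the browser's real prompt
        : false;                  // outside a browser context: refuse
    if (!ok) return { status: "rejected", reason: "user declined" };
    return execute(args);
  };
}

// A risky tool wrapped with the gate (the handler body is illustrative):
const cancelOrderExecute = gated(
  "Allow the agent to cancel this order?",
  async ({ orderId }) => ({ status: "cancelled", orderId })
);
```

The design point: the gate lives in the tool contract itself, so no agent path can reach the destructive action while bypassing the confirmation.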
Impact on Business and Automation: Winners and Losers
If WebMCP takes root, I expect a sharp shift in AI automation practices. Until now, browser-based agent scenarios relied on fragile layers: selectors, layout changes, anti-bot protection, captchas, and visual model instability. Here, the site itself tells the agent: "here are my actions, here is their contract."
The winners will be companies with transactional flows: e-commerce, travel, self-service banking, and B2B portals with orders and invoices. Where the cost of error is high, moving from scraping to tool-calls yields measurable effects: fewer incidents, less manual troubleshooting, and higher task completion conversion.
The losers will be those who profited from "integration via parsing" and cannot adapt quickly. Sites that fail to offer a convenient tool contract will also suffer: agents will start preferring "agent-ready" competitors simply because it is easier and more reliable for them to complete the task.
In Nahornyi AI Lab projects, I would immediately factor in WebMCP as a new layer in AI solution architecture: alongside the backend API, event tracking, and security policies. It’s not a replacement for your API, but a way to give the agent the right "lever" in the browser while keeping control with the product and security teams.
However, without engineering discipline this will be painful. You need schemas, contract versioning, tests for tool flows, observability (which tool was called, with what arguments, and how it responded), and deliberate design of confirmation prompts, so they neither wreck the UX nor leave loopholes open to abuse.
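The observability piece can be as simple as a wrapper around every registered tool. A minimal sketch, with an in-memory array standing in for a real telemetry pipeline, and checkStatus as a hypothetical example tool:

```javascript
// In-memory event sink; in production these records would be shipped
// to your telemetry / logging pipeline.
const toolCallLog = [];

// Wraps a tool so every call records the tool name, arguments,
// duration, and success or failure.
function withObservability(tool) {
  return {
    ...tool,
    async execute(args) {
      const started = Date.now();
      try {
        const result = await tool.execute(args);
        toolCallLog.push({ tool: tool.name, args, ms: Date.now() - started, ok: true });
        return result;
      } catch (err) {
        toolCallLog.push({ tool: tool.name, args, ms: Date.now() - started, ok: false, error: String(err) });
        throw err; // re-throw so the caller still sees the failure
      }
    },
  };
}

// Example: an instrumented hypothetical status-check tool.
const checkStatus = withObservability({
  name: "checkStatus",
  description: "Return the status of an order.",
  async execute({ orderId }) {
    return { orderId, status: "shipped" };
  },
});
```

Because the wrapper preserves the tool's shape, it can be applied uniformly at registration time rather than sprinkled through individual handlers.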
Strategic Analysis: AXO as a New Funnel and What I Would Do Now
The most underrated effect is the emergence of Agent Experience Optimization (AXO) as a parallel world to SEO. I see it this way: previously, you optimized a page for a robot indexer and a human. Now you must optimize a set of actions for an agent that "buys," "transfers," "verifies," or "processes a return."
In my AI implementations, one problem surfaces almost every time: business logic is scattered across the frontend, analytics, and backend, while the agent needs a short, deterministic path. WebMCP pushes toward correct decomposition: isolate intents (searchFlights, checkAvailability, createInvoice) and give each strict inputs and outputs.
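What a "strict contract" for one such intent might look like, sketched for a hypothetical searchFlights tool (the field names and version scheme are assumptions, not part of any spec):

```javascript
// Sketch of a strict intent contract: one deterministic entry point,
// explicit inputs, no reliance on hidden page state.
const searchFlightsContract = {
  name: "searchFlights",
  version: "1.0.0", // contract versioning lets agents detect breaking changes
  inputSchema: {
    type: "object",
    properties: {
      origin:      { type: "string", pattern: "^[A-Z]{3}$" }, // IATA code
      destination: { type: "string", pattern: "^[A-Z]{3}$" },
      date:        { type: "string", format: "date" },
      passengers:  { type: "integer", minimum: 1, maximum: 9, default: 1 },
    },
    required: ["origin", "destination", "date"],
    additionalProperties: false, // reject anything outside the contract
  },
};

// Minimal required-field check; a real implementation would run a full
// JSON Schema validator (e.g. Ajv) before the handler ever executes.
function missingRequired(contract, args) {
  return contract.inputSchema.required.filter((k) => !(k in args));
}
```

Rejecting malformed input at the schema boundary is what makes the agent path deterministic: either the call matches the contract, or it fails fast with a precise reason.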
I wouldn't wait for a "standard across all browsers." I would choose 3–5 key scenarios with maximum ROI and start a prototype: cart/checkout, status check, search/filters, booking, document generation. This provides a quick signal on how agent flow conversion changes and how much support you save.
And one more thing: WebMCP isn't just for clients. Internal portals (procurement, warehouse, service desk) can be turned into an instrumental layer for corporate agents without exposing dangerous APIs externally. For many companies, this is the safest path to achieve artificial intelligence implementation not in presentations, but in daily operations.
This analysis was prepared by Vadim Nahornyi — lead expert at Nahornyi AI Lab on AI architecture and agentic automation in the real sector. I will help you design a WebMCP/agent-ready layer: select scenarios, describe tool schemas, build security and metrics, and then bring it to production. Write to me — let's discuss your product and draft an implementation plan for 2–6 weeks.