
Vercel Was Hacked Through an AI Tool

Vercel disclosed a supply chain attack via Context.ai: a compromised third-party AI tool gave attackers access to an employee's Google Workspace and several environments. For businesses, this is a clear warning to reassess AI integrations, OAuth permissions, and the classification of sensitive environment variables.

The Technical Context

I appreciate cases like this not for the drama, but for the transparent mechanics of the attack. It's an all-too-familiar scenario for anyone integrating AI in real companies: a third-party AI service, broad OAuth permissions, one compromised account, and then a domino effect.

According to Vercel, the entry point wasn't on their end but with Context.ai, a tool used by an employee. By compromising this third-party tool, the attacker hijacked the Vercel employee's Google Workspace account and, from there, accessed some internal environments and environment variables that were not marked as sensitive.

The critical detail is that Vercel claims variables marked as sensitive were not accessible in readable form. This is where I really had to pause: the entire difference between an 'unpleasant incident' and a 'total nightmare' came down to the simple discipline of classifying secrets.

Additional context paints an even starker picture. It appears that Context.ai had its OAuth tokens stolen after an employee's machine was infected with an infostealer. The attacker then used these existing permissions to bypass the perimeter through a trusted integration. This is a textbook supply-chain attack, just with an AI label on top.

What strikes me most here isn't the breach itself, but how easily teams still view AI tools as 'just another convenient SaaS.' No, it's now part of your identity plane and your AI architecture, especially if the service requests access to Workspace, email, documents, logs, or deployment tokens.

What This Changes for Business and Automation

First, OAuth for AI services must be treated as production-level access, not just a harmless 'Sign in with Google' button. If you're building AI automation and connecting external tools to Workspace, Slack, GitHub, or Vercel, you already have a supply-chain attack surface.
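One practical way to start treating OAuth grants as production access is a periodic audit of issued tokens. Here is a minimal sketch that flags grants holding overly broad scopes. The record shape (`clientId`, `displayText`, `scopes`) is an assumption modeled loosely on what Google's Admin SDK Directory API returns for issued tokens; the scope list is illustrative, not exhaustive. Adapt both to your actual audit export.

```python
# Scopes that effectively grant production-level access (illustrative,
# hypothetical threshold -- extend for your own environment).
BROAD_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def flag_broad_grants(tokens: list[dict]) -> list[dict]:
    """Return OAuth grants that hold at least one broad scope."""
    flagged = []
    for token in tokens:
        broad = BROAD_SCOPES.intersection(token.get("scopes", []))
        if broad:
            flagged.append({
                "clientId": token.get("clientId"),
                "app": token.get("displayText"),
                "broad_scopes": sorted(broad),
            })
    return flagged
```

Run something like this over a per-user export of issued tokens, then review each flagged app and revoke what isn't justified.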

Second, environment variables without proper classification are like a forgotten landmine. It doesn't matter if you consider them 'not very sensitive' if they can be used for lateral movement or to harvest API keys and deployment tokens.
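Defusing that landmine starts with an audit: find variables that look like secrets but aren't classified as such. A minimal sketch, assuming records with `key` and `type` fields and a `"sensitive"` classification value, loosely modeled on Vercel's project env listing; map the field names onto whatever your platform actually returns.

```python
# Substrings that suggest a variable holds a secret (heuristic, illustrative).
SECRET_HINTS = ("KEY", "TOKEN", "SECRET", "PASSWORD")

def unclassified_secrets(env_vars: list[dict]) -> list[str]:
    """Return names of variables that look like secrets but lack the sensitive mark."""
    suspicious = []
    for var in env_vars:
        name = var.get("key", "")
        looks_secret = any(hint in name.upper() for hint in SECRET_HINTS)
        if looks_secret and var.get("type") != "sensitive":
            suspicious.append(name)
    return suspicious
```

Anything this surfaces should either be reclassified as sensitive or consciously accepted as non-secret, in writing.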

Third, teams that practice a zero-trust approach to any AI agent and third-party integration will win. Those who grant broad scopes, fail to audit issued tokens, and manage secrets with a 'we'll sort it out later' mindset will lose.
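The zero-trust posture can be enforced at grant time with a deny-by-default policy: a new integration is approved only if every scope it requests is on an explicit allowlist. A minimal sketch; the allowlist contents are illustrative examples, not a recommendation.

```python
def grant_allowed(requested: set[str], allowlist: set[str]) -> bool:
    """Deny by default: every requested scope must be explicitly allowlisted."""
    return requested <= allowlist

# Hypothetical allowlist of narrow, read-mostly scopes.
ALLOWLIST = {
    "https://www.googleapis.com/auth/calendar.readonly",
    "https://www.googleapis.com/auth/userinfo.email",
}
```

A single over-broad scope fails the whole request, which forces the conversation about why a tool needs it before the token exists, not after.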

I constantly see the same mistake with my clients: they want rapid artificial intelligence implementation but treat connection security as a secondary task. And then, it's precisely this security that determines whether you get process acceleration or a costly incident.

If you already have a growing zoo of AI tools, I would immediately review your OAuth grants, secret classifications, and agent permissions. If needed, we at Nahornyi AI Lab can analyze your setup and build an AI automation framework that saves you time, rather than opening a side door for the next breach.

We have previously explored how AI agents can be exploited to bypass security measures within systems. Specifically, we analyzed practical cases where agents achieved sandbox evasion through command chaining, demonstrating a critical vector for system compromise.
