
Meta's Recipe for Using AI Agents in Legacy Code

Meta detailed how it used 50+ agents to turn legacy code and tribal knowledge into a compact map for AI automation. This matters for business because you don't need to feed the entire codebase to an AI. It dramatically speeds up research, reduces token usage, and makes old systems ready for AI integration.

Technical Context

I love these kinds of engineering breakdowns not for the slick PR, but because you can extract a workable blueprint for AI implementation in real-world legacy systems. Meta didn't try to stuff the entire codebase into the model. They built a 'compass' on top of the code, and that's what I call mature AI architecture.

The initial state was messy, as is common in production: 4 repositories, 3 languages, 4100+ files, and a ton of tribal knowledge not documented in Jira or architecture diagrams. Instead of one 'smart' agent, they deployed 50+ specialized agents across 9 phases.

I dug into the details, and the most powerful thing here isn't the number of agents, but the process discipline. For each file, analysts answered 5 questions: what it does, what patterns it contains, where the non-obvious traps are, what its dependencies are, and what unwritten rules need to be known. Then, writer agents drafted summaries, and critic agents conducted 3 rounds of vicious reviews.
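To make the five-question discipline concrete, here is a minimal sketch of what one per-file analysis record might look like. The class and field names are my assumption for illustration — Meta did not publish its actual schema:

```python
from dataclasses import dataclass


@dataclass
class FileAnalysis:
    """Answers to the five analyst questions for a single source file.

    Hypothetical schema: one record per file, flattened into a compact
    snippet that downstream writer/critic agents can consume.
    """
    path: str
    purpose: str              # 1. what the file does
    patterns: list[str]       # 2. what patterns it contains
    gotchas: list[str]        # 3. where the non-obvious traps are
    dependencies: list[str]   # 4. what its dependencies are
    unwritten_rules: list[str]  # 5. what unwritten rules must be known

    def to_prompt(self) -> str:
        """Flatten the record into a short, labeled context snippet."""
        lines = [f"FILE: {self.path}", f"PURPOSE: {self.purpose}"]
        for label, items in [
            ("PATTERNS", self.patterns),
            ("GOTCHAS", self.gotchas),
            ("DEPENDS ON", self.dependencies),
            ("UNWRITTEN RULES", self.unwritten_rules),
        ]:
            lines += [f"{label}: {item}" for item in items]
        return "\n".join(lines)
```

The point of a fixed schema is that every file gets interrogated the same way, so the resulting snippets are uniform enough for the later writer and critic stages to process mechanically.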

Yes, vicious. Not 'lightly edit,' but tear down questionable conclusions, expose hallucinations, and find outdated references. After that, fixer agents refined the materials, and the average quality score rose from 3.65 to 4.20 out of 5.
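The critic/fixer loop described above can be sketched as a simple control structure. The `critique`, `fix`, and `score` callables here are stand-ins for LLM-backed agents; the round count and quality target mirror the article's numbers, but the function itself is my illustration, not Meta's code:

```python
def review_loop(draft, critique, fix, score, rounds=3, target=4.2):
    """Run a draft through repeated critic -> fixer passes.

    critique(text) -> list of issues (hallucinations, stale references, ...)
    fix(text, issues) -> revised text
    score(text) -> quality on a 1-5 scale

    Stops early when the critics find nothing or the quality target is met.
    """
    for _ in range(rounds):
        issues = critique(draft)
        if not issues:
            break  # critics have nothing left to tear down
        draft = fix(draft, issues)
        if score(draft) >= target:
            break  # good enough; stop burning tokens
    return draft
```

The early-exit conditions matter: without them, every artifact pays for the full three rounds even when the first pass already cleared the bar.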

The output was 59 compact context files, each 25-35 lines long and about 1,000 tokens — less than 0.1% of a modern context window. The idea is simple: not an encyclopedia, but short prompts that trigger precise retrieval on demand.
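"Retrieval on demand" over a small map can be sketched in a few lines. I'm using toy lexical-overlap scoring here purely for illustration — the article doesn't say how Meta actually ranks context files, and a real system would likely use embeddings:

```python
def select_context(task: str, context_files: dict[str, str], k: int = 3) -> str:
    """Pick the k context files most relevant to a task description.

    context_files maps a file name to its compact (~1,000-token) summary.
    Relevance = word overlap between the task and the summary; a crude
    stand-in for whatever retrieval the real pipeline uses.
    """
    task_words = set(task.lower().split())
    ranked = sorted(
        context_files.items(),
        key=lambda kv: -len(task_words & set(kv[1].lower().split())),
    )
    # Hand the agent only the top-k summaries, not the whole codebase.
    return "\n\n".join(text for _, text in ranked[:k])
```

Even this crude version captures the economics: the agent's prompt grows with k, not with the size of the repository.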

This is where it clicked for me. Most teams are still debating whether AI can even work with their 'special' legacy code. Meta effectively showed that the problem isn't the code's uniqueness, but the lack of a machine-readable map of micro-decisions that only exist in people's heads.

Business and Automation Impact

The practical takeaway is very down-to-earth: if you're building AI automation on top of an old system, feeding the entire repository into the context is usually foolish and expensive. A concise domain map leads to fewer calls, fewer tokens, and much more stable agent routing through the code.

Meta's numbers are impressive: 40% fewer calls, 40% fewer tokens, and research tasks that once took two days now take 30 minutes. For a team, this is no longer an 'interesting experiment' but a direct impact on maintenance costs and the speed of change.

Companies with heavy legacy systems, where expertise is scattered across people and repos, are the winners. The losers are those still hoping that the code itself is the only source of truth for an agent. In practice, at Nahornyi AI Lab, we solve these exact bottlenecks: first, we extract the system's hidden rules, and only then do we build AI solutions for business without wasting money on chaotic generation.

If your agent is drowning in old code and your team is wasting days on discovery, I'd start not with a new model, but with a knowledge map. If you're interested, let's analyze your stack and see how we at Nahornyi AI Lab can build AI automation that actually eliminates routine tasks instead of burning your budget on empty tokens.
