DeFi · Cybersecurity · AI Agents

AI Agents and the Wave of DeFi Hacks

In April 2026, DeFi really did lose over $600M, but the claim that confirmed AI hackers were behind it remains unproven. For businesses, the key takeaway is different: AI automation dramatically lowers the cost of finding vulnerabilities, which means the standards for monitoring, AI integration, and security architecture have already fundamentally shifted.

What I Fact-Checked

I dove into the April summaries and quickly ran into a familiar problem: the numbers spread faster than the post-mortems. The major incidents were real, especially Kelp DAO and Drift, and total damages did exceed $600 million. However, the list circulating in chats didn't always match the confirmed names and amounts, and the story of mass AI hackers remains more hypothesis than proven fact.

For instance, the discussion was about Aftermath Protocol, not Aftermath Finance. For Kelp DAO, the public analysis pointed to an infrastructure compromise, DDoS, and a flaw in the bridge message verification scheme. This is no longer the romantic notion of 'a smart contract being off by a single require statement' but a standard multi-layered hack targeting both infra and trust assumptions.
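
To make the trust-assumption point concrete, here is a minimal sketch of the failure mode, under my own illustrative assumptions rather than Kelp DAO's actual design: a bridge that accepts a message on a single verifier's say-so versus one that demands agreement from a quorum of distinct trusted verifiers.

```python
# Minimal sketch: why a bridge must not trust a single verifier.
# Hypothetical design and names, NOT Kelp DAO's actual scheme.

def verify_bridge_message(payload_hash: str,
                          approvals: dict[str, str],
                          trusted_verifiers: set[str],
                          quorum: int) -> bool:
    """approvals maps verifier id -> the payload hash that verifier attested to."""
    # Count only DISTINCT trusted verifiers that attested to THIS payload.
    # With quorum=1, a single compromised signer is game over — exactly the
    # trust assumption a multi-layered attack goes after.
    valid = {v for v, h in approvals.items()
             if v in trusted_verifiers and h == payload_hash}
    return len(valid) >= quorum

# With quorum=2, compromising one verifier is no longer enough.
assert not verify_bridge_message("0xabc", {"v1": "0xabc"}, {"v1", "v2", "v3"}, 2)
assert verify_bridge_message("0xabc", {"v1": "0xabc", "v2": "0xabc"}, {"v1", "v2", "v3"}, 2)
```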

Against this backdrop, my focus isn't on the hype but on putting AI to work in defense. Even if AI agents weren't confirmed in these specific cases, the cost of mass reconnaissance is plummeting: finding a weak RPC endpoint, a sloppy configuration, a suspicious admin role, or flawed oracle logic is far faster than it was a year ago.
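
What does cheap mass reconnaissance look like in practice? Roughly this: a loop over endpoints and a handful of JSON-RPC probes. The sketch below is illustrative (the endpoint URL is a placeholder, and the method list is just a few node namespaces that should never be public), but it shows how little effort "finding a weak RPC" takes today.

```python
# Sketch of cheap automated recon: probe an RPC endpoint for admin/debug
# namespaces that should never be exposed publicly.
import requests

SENSITIVE_METHODS = [
    "admin_nodeInfo",         # node admin namespace reachable
    "personal_listAccounts",  # hosted keys enumerable
    "debug_traceTransaction", # heavy debug API open to anyone
    "txpool_content",         # full mempool visibility
]

def probe_rpc(url: str) -> list[str]:
    exposed = []
    for i, method in enumerate(SENSITIVE_METHODS):
        body = {"jsonrpc": "2.0", "id": i, "method": method, "params": []}
        try:
            resp = requests.post(url, json=body, timeout=5).json()
        except requests.RequestException:
            continue
        if "result" in resp:
            exposed.append(method)
        elif resp.get("error", {}).get("code") == -32602:
            # "Invalid params" instead of "method not found" (-32601):
            # the method exists, we just called it wrong.
            exposed.append(method)
    return exposed

# print(probe_rpc("https://rpc.example.com"))  # placeholder endpoint
```

Point a loop like this at a list of hosts and the "manual attacker who sleeps at night" model collapses.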

Here's what unsettles me: the market still designs protocols as if attackers work manually and sleep at night. But an attacker may not sleep at all anymore.

What This Means for Business and Automation

The first consequence is simple: manual security reviews are no longer sufficient as a single layer of defense. If you operate a bridge, lending, staking, or wallet infrastructure, you need continuous AI automation to find anomalies in access rights, configurations, on-chain flows, and DevOps changes.
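
As one concrete example of what "continuous" means here, a minimal sketch of a single monitoring rule: baseline the protocol's hourly outflows and alert when the current observation blows past the rolling statistics. The class name and thresholds are my own illustrative choices, not a specific product's API.

```python
# One continuous-monitoring rule: flag outflows far above the rolling
# baseline. In production this would sit alongside access-control and
# config-drift checks; the numbers here are illustrative only.
from collections import deque
from statistics import mean, stdev

class OutflowMonitor:
    def __init__(self, window: int = 24, threshold_sigma: float = 4.0):
        self.history: deque[float] = deque(maxlen=window)
        self.threshold_sigma = threshold_sigma

    def observe(self, outflow: float) -> bool:
        """Return True if this observation should raise an alert."""
        alert = False
        if len(self.history) >= 8:  # need some baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            alert = outflow > mu + self.threshold_sigma * max(sigma, 1e-9)
        self.history.append(outflow)
        return alert

mon = OutflowMonitor()
for hourly in [10, 12, 9, 11, 10, 13, 12, 10, 11, 950]:  # last value: a drain
    if mon.observe(hourly):
        print("ALERT: anomalous outflow", hourly)
```

A real deployment layers many such rules and lets an AI pipeline triage them, but even this toy version catches the drain pattern most post-mortems describe.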

Second, the winners are teams that build security into their product's AI architecture, not those who bolt it on after an incident. The losers are those who rest critical assumptions on a single verifier, a single key, a single RPC provider, or a single person with access.
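
The single-RPC-provider assumption in particular is cheap to remove. A hedged sketch, with placeholder provider URLs: read the same block from several independent providers and refuse to act unless a quorum agrees.

```python
# Sketch of removing the "single RPC provider" trust assumption:
# cross-check a critical read across independent providers.
import requests
from collections import Counter

PROVIDERS = [
    "https://rpc-a.example.com",  # placeholder endpoints
    "https://rpc-b.example.com",
    "https://rpc-c.example.com",
]

def quorum_block_hash(block_number_hex: str, min_agree: int = 2) -> str | None:
    answers = []
    for url in PROVIDERS:
        body = {"jsonrpc": "2.0", "id": 1, "method": "eth_getBlockByNumber",
                "params": [block_number_hex, False]}
        try:
            result = requests.post(url, json=body, timeout=5).json().get("result")
        except requests.RequestException:
            continue  # one dead provider must not be a single point of failure
        if result:
            answers.append(result["hash"])
    value, votes = Counter(answers).most_common(1)[0] if answers else (None, 0)
    return value if votes >= min_agree else None  # refuse to act on disagreement
```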

And third, a bug bounty of 'thanks and $2k' now looks almost like an insult. If the market doesn't pay white-hat researchers, others will monetize the vulnerabilities.

I see this in client projects as well: security can no longer be separated from automation, because automation is available to both the defender and the attacker. If you have a DeFi product, a wallet, or a high-risk Web3 integration, we can systematically analyze your architecture and build a proper monitoring perimeter. At Nahornyi AI Lab, we do exactly this in practice: from AI solution development for detection pipelines to targeted AI integration into existing processes, so your business doesn't have to wait for its own post-mortem.

The sums lost in DeFi underscore how important it is to understand the ways AI agents themselves can be manipulated into unauthorized actions. We have previously detailed how AI agents might bypass security sandboxes through command chaining, a glimpse into the kind of methods that could plausibly appear in financial exploits like these.
