Skip to main content
cybersecuritynpmclaude-code

Malware in AI Tools Survives Uninstall

A malicious npm package was found to achieve persistence by modifying .claude/settings.json and .vscode/tasks.json files. This means `npm uninstall` won't remove it. For businesses, this poses a silent risk to development tools, compromising workflows that rely on AI integration and automation, as the malware can re-execute itself through legitimate IDE hooks.

Technical Context

I took note of this story not because of yet another npm incident, but because of its persistence mechanism. Here, the malware doesn't cling to the package itself. It uses normal, legitimate hooks in Claude Code and VS Code to re-launch itself on tool-specific events.

According to published reports, the goal is simple and malicious: to write itself into .claude/settings.json and .vscode/tasks.json. After this, a standard npm uninstall no longer fixes the problem because the re-entry point remains in the configuration. This elevates the attack to a different class of supply chain threat.

I always evaluate such things from a practical AI integration standpoint, not just security theory. If a team uses Claude Code for generation, refactoring, or internal AI automation scenarios, a compromise turns from a simple machine infection into persistent access to the developer's workflow.

What's particularly nasty here is that the attacker isn't exploiting some exotic vulnerability but is using what's considered a standard IDE feature. This means many teams could spend weeks searching for a malicious package, while the real persistence mechanism is already embedded in their local project or user settings.

For now, public technical details are scarce, and we must be honest about that. But even without a full set of IOCs, the picture is clear: if an npm package has ever modified these files, removing the dependency does not equal cleaning the environment.

I would look at three things at a minimum: unexpected task entries in VS Code, suspicious hooks in Claude Code, and any auto-start scripts that appeared without an explicit team decision. If you have repository templates, devcontainer settings, or bootstrap scripts, those are also worth checking.

What This Changes for Business and Automation

The first consequence is trivial but costly: trust in AI developer tooling can no longer be based on a dependency list alone. You need to control configurations and post-install behavior; otherwise, AI implementation in development becomes an unnecessary entry point.

Second, teams without a basic hardening policy for their IDEs and agentic tools will lose. Those who maintain reference configurations, check for drift, and separate local experiments from the production chain will win.

At Nahornyi AI Lab, we specialize in bridging these gaps between security and automation with AI. We focus on building a proper AI architecture with verifiable hooks, sandboxing, and dev-process auditing, not just a flashy demo agent. If you have AI tools integrated into your development and want to understand where such a threat could be hiding, let's analyze your workflow and build AI automation without hidden surprises.

The challenge of securing AI development environments against such persistent threats is paramount. We have previously examined how AI agents can bypass sandboxes through command chaining, providing crucial insights into the risks involved and the necessary control mechanisms for secure AI execution.

Share this article