What Anthropic Actually Rolled Out
The headline sounds big, so I dug into the feature description; as usual, the devil is in the details. In early March 2026, Anthropic enabled a Memory Import function in Claude, and it's more than just "paste the whole chat." Claude takes an export from another AI service, processes it, and converts it into its own memory objects.
So, it's not a literal one-to-one transfer. I would call it an interpreted migration: you provide the material, and Claude extracts stable facts, preferences, work habits, and project details, saving them as its own memory edits.
The import is initiated manually via Settings → Capabilities → Memory. Anthropic provides a special prompt for requesting an export from another provider, or you can paste in Markdown yourself. After pasting, Claude shows the proposed changes, which you can review via Manage Edits. The memory itself may not update instantly, but it should within 24 hours.
Right here, I made a mental note: this is convenient, but not deterministic. The documentation explicitly states that the feature is experimental, not every entry will be transferred, and Claude holds onto work context better than personal “just-in-case” notes.
Why This Is Particularly Interesting for Dev and Agentic Scenarios
What caught my attention in this news wasn't the “memory” itself, but the reduction of friction when switching tools. If a team has accumulated a layer of habits, naming conventions, architectural details, stack specifics, code style requirements, and project constraints, all of this previously had to be re-entered into a new model. This is a tedious and expensive switching tax.
With Memory Import, this tax is lower. Not zero, but lower. For a developer workflow where people jump between Claude, ChatGPT, Gemini, local tools, and code agents, continuity suddenly becomes a practical tool rather than just a marketing buzzword.
This is especially relevant as some developers have been actively comparing Claude Code, Opus, and Codex in recent weeks. I see discussions trending in the same direction: if the quality of the primary model dips, people don't want the extra hassle of re-explaining the project to a new tool. And this is where memory import sharply reduces the pain of migration.
Where's the Real Value vs. Marketing Hype?
Realistically, this isn't a “unified memory across all AIs.” It’s a bridge. An uneven, sometimes manual, but already useful one. Claude doesn't promise to save everything, and that's fine: memory in models doesn't transfer well as a strict database anyway.
But for teams with long-running assistants, internal documentation, pre-sale bots, engineering copilot scenarios, and AI automation around development, even a bridge like this saves hours. Not because the model got smarter, but because people repeat themselves less.
I would especially recommend this to teams building AI solutions for business not as a "single chat for inspiration" but as a system: agent, CRM, knowledge base, task tracker, coding environment. In that kind of AI solution architecture, context costs money. If it can be transferred between layers of the ecosystem, implementation becomes noticeably more practical.
The losers here are essentially the vendors who kept users locked in through accumulated history. When memory can be at least partially taken with you, the choice of model starts to depend more on actual quality and price, rather than the fear of losing established context.
What I Would Do as a Team Right Now
I wouldn't treat Memory Import as a reason to mindlessly migrate everything. I would run a short test on one live process: for example, transferring memory for a coding assistant, support agent, or internal analytics bot. It will immediately become clear what transfers well and what's better stored outside the model's memory, in a proper knowledge layer.
This, by the way, is the main engineering takeaway. A model's memory is convenient, but it shouldn't be the single source of truth. At Nahornyi AI Lab, we usually separate this into two layers: the agent's operational memory and an external context database. This makes AI implementation more robust, and changing models no longer seems like a mini-disaster.
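The two-layer split can be sketched in a few lines. This is a minimal illustration under my own assumptions, not Anthropic's implementation or a real library API: all names here (`ContextStore`, `remember`, `recall`) are hypothetical, and SQLite stands in for whatever external knowledge layer you actually use.

```python
import sqlite3


class ContextStore:
    """Hypothetical sketch of the two-layer context pattern:
    a volatile in-process 'operational memory' for the agent's
    current session, backed by a durable external knowledge layer
    (SQLite here as a stand-in for a proper knowledge base)."""

    def __init__(self, db_path=":memory:"):
        self.session = {}  # layer 1: volatile operational memory
        self.db = sqlite3.connect(db_path)  # layer 2: durable context
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS facts (key TEXT PRIMARY KEY, value TEXT)"
        )

    def remember(self, key, value, durable=False):
        self.session[key] = value
        if durable:  # only stable project facts go to the external layer
            self.db.execute(
                "INSERT OR REPLACE INTO facts VALUES (?, ?)", (key, value)
            )
            self.db.commit()

    def recall(self, key):
        # session memory wins; fall back to the durable layer
        if key in self.session:
            return self.session[key]
        row = self.db.execute(
            "SELECT value FROM facts WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None

    def new_session(self):
        # switching models wipes only layer 1; durable context survives
        self.session = {}
```

The point of the design: when you swap models, only the operational layer resets, while stack specifics, code style, and project constraints live on in the external layer and can be re-injected into any assistant.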
I'm Vadym Nahornyi, and at Nahornyi AI Lab, I build these things by hand: AI agents, n8n scenarios, integrated AI loops, and work pipelines where context doesn't fall apart after the first model switch.
If you want to discuss your case, order AI automation, create an AI agent, or build an n8n automation for your process, get in touch. I'll figure out how best to organize it across memory, data, and tools, without any unnecessary magic.