Technical Context
I watched a recent interview with Martin Kleppmann not for the usual general talk about AI, but for something more specific: how someone from the world of data-intensive systems adapts his principles for AI implementation, now that models are reaching into data, workflows, and internal APIs.
And this is where it got interesting. Kleppmann isn't selling magic; he's essentially saying that if AI is to change anything in a system, you can't just give it database access and hope for the best.
His line of reasoning is very sound: models should operate through secure interfaces where changes can be verified, discussed, and merged, almost like code. To me, this is a strong signal: proper AI automation in serious products will be built not around “the agent did everything itself,” but around controlled operations with a clear audit trail.
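To make that concrete, here is a minimal sketch of the "controlled operations" idea, with hypothetical names (`ChangeProposal`, `AuditedStore`) invented for illustration: the agent never writes to the store directly; it submits a proposal, and the change is applied only after review, with every step recorded in an audit log.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ChangeProposal:
    author: str
    operation: str
    payload: dict
    status: str = "pending"

class AuditedStore:
    """Sketch: agents don't get raw write access; they submit
    proposals that are reviewed and approved before being applied."""

    def __init__(self):
        self.data = {}
        self.audit_log = []  # (timestamp, event, proposal) tuples

    def propose(self, proposal: ChangeProposal) -> ChangeProposal:
        # Record the intent before anything touches the data.
        self.audit_log.append((datetime.now(timezone.utc), "proposed", proposal))
        return proposal

    def approve_and_apply(self, proposal: ChangeProposal, reviewer: str):
        # A human (or an automated policy check) signs off before mutation.
        if proposal.operation == "set":
            self.data[proposal.payload["key"]] = proposal.payload["value"]
        proposal.status = "applied"
        self.audit_log.append(
            (datetime.now(timezone.utc), f"approved by {reviewer}", proposal)
        )

store = AuditedStore()
p = store.propose(ChangeProposal("agent-1", "set", {"key": "plan", "value": "pro"}))
store.approve_and_apply(p, reviewer="alice")
```

The design choice mirrors a pull-request workflow: the mutation itself is trivial, but the choke point around it gives you review, rollback context, and a trail you can audit later.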
The second important piece concerns the data itself. Classic architecture no longer covers all AI workloads because alongside regular indexes, we now have embeddings, vector search, semantic search, and RAG. Plus, there's multimodal data, for which old storage formats are often inconvenient or simply too slow.
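The retrieval side differs from classic indexing in a simple way: instead of exact key or range lookups, you rank documents by similarity between embedding vectors. A minimal brute-force sketch (real systems use approximate-nearest-neighbor indexes like HNSW rather than a full scan, and the vectors come from an embedding model, not hand-written toy values):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, docs, k=2):
    """Return indexes of the k documents most similar to the query."""
    ranked = sorted(range(len(docs)), key=lambda i: -cosine(query, docs[i]))
    return ranked[:k]

# Toy 4-dimensional "embeddings"; a real pipeline would produce
# hundreds of dimensions per chunk of text or image.
docs = [
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.80, 0.20, 0.00],
    [0.85, 0.15, 0.05, 0.00],
]
query = [1.0, 0.0, 0.0, 0.0]
nearest = top_k(query, docs, k=2)
```

This is the primitive underneath RAG: retrieve the nearest chunks, then feed them to the model as context. The storage question Kleppmann raises is exactly that this lookup pattern doesn't map onto B-tree indexes at all.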
Another point that resonates with me is the need to maintain a low-level understanding. When there are too many abstractions and AI tools, teams quickly forget how storage engines, replication, consistency, and multi-region trade-offs actually work. Then they wonder why the agent writes beautiful code, but the system crumbles under load.
At the same time, Kleppmann doesn't abandon the foundation of DDIA. Replication remains key, and manual sharding no longer looks like a universal hero, especially in the cloud and on large machines. What's new doesn't cancel the fundamentals; it builds on top of them.
What This Changes for Business and Automation
I would highlight three practical takeaways. First: if you're building AI solutions for business, the data layer must be designed from the start for retrieval, review, and safe change application, not bolted on later.
Second: teams that don't confuse demos with production will win. Those who give agents too much freedom without API boundaries, logging, and human oversight will lose.
Third: the cost of error is rising. A wrong AI integration today impacts not only the UX but also data, processes, and legal risks.
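The guardrails behind the second takeaway can be sketched in a few lines. This is a hypothetical gateway (the tool names and `call_tool` helper are invented for illustration): every agent action passes through one choke point that enforces an allowlist, escalates high-risk operations to a human, and logs everything.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

ALLOWED_TOOLS = {"search_orders", "draft_reply"}    # low-risk, auto-approved
NEEDS_HUMAN = {"refund_payment", "delete_account"}  # high-risk, human-gated

def call_tool(agent_id: str, tool: str, args: dict, human_approved: bool = False) -> dict:
    """Single entry point for agent actions: allowlist, escalation, audit log."""
    if tool in ALLOWED_TOOLS:
        log.info("agent=%s tool=%s args=%s", agent_id, tool, args)
        return {"status": "executed"}
    if tool in NEEDS_HUMAN and human_approved:
        log.info("agent=%s tool=%s approved by human", agent_id, tool)
        return {"status": "executed"}
    log.warning("agent=%s blocked tool=%s", agent_id, tool)
    return {"status": "blocked"}
```

The point is not the dictionary lookup; it is that the agent has no other path to the system. Demos skip this layer, production cannot.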
These are precisely the kinds of bottlenecks I address in projects at Nahornyi AI Lab: where RAG is needed, where proper search is enough, where an agent is necessary, and where a rigid workflow is better. If your AI automation is already hitting a wall of data chaos or dangerous access rights, we can sit down and build an architecture that truly helps the business instead of creating a new class of problems.