
Claude Taught to Control a Body via EMS

A researcher built a prototype in 48 hours where a camera captures the scene, Claude analyzes it, and EMS signals move a hand with minimal human input. While not a market-ready product, it's a key signal for business: AI integration is moving beyond software into physical control of devices and bodies.

Technical Context

I love projects like this not for the hype, but for the architecture. What they've built is a genuine chain: camera → Claude 3.5 Sonnet → motion JSON → EMS impulses → physical action. This is no longer just a chatbot; it's tangible AI automation at the intersection of vision and actuators.

The prototype was made in 48 hours back in October 2025, so I'm looking at it now as a proven reference rather than breaking news. The sources are solid: a LinkedIn post by Endrit Restelica, a YouTube video, and an open GitHub repository with the pipeline.

I dug into the specs, and the most interesting part isn't a record in Beat Saber but that the setup works at all on accessible hardware. The input comes from a 1080p/60fps webcam and the computation runs on a Raspberry Pi 5. Claude receives frames and returns a structure with fields like target_pose, muscle_groups, and intensity, which Python and Arduino then translate into EMS pulses.
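
To make that data flow concrete, here is a minimal sketch of the frame-to-command step, assuming the official Anthropic Python SDK. The prompt wording, the JSON schema details, and the helper name frame_to_motion_command are my own illustration, not code from the repository; only the field names target_pose, muscle_groups, and intensity come from the project's description.

```python
import base64
import json

import anthropic  # official Anthropic SDK; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

def frame_to_motion_command(jpeg_bytes: bytes) -> dict:
    """Send one webcam frame to Claude and parse the motion JSON it returns."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image", "source": {
                    "type": "base64",
                    "media_type": "image/jpeg",
                    "data": base64.b64encode(jpeg_bytes).decode(),
                }},
                {"type": "text", "text": (
                    "Analyze the scene and respond with JSON only: "
                    '{"target_pose": "...", "muscle_groups": ["..."], "intensity": 0.0}'
                )},
            ],
        }],
    )
    # The downstream Python/Arduino layer maps these fields onto EMS channels and pulse widths.
    return json.loads(response.content[0].text)
```

From there, the parsed dictionary would typically be serialized over a serial link to the Arduino, which drives the stimulator channels.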

The claimed latency is around 142 ms for the frame-to-muscle chain. For fine motor skills, this is still a bit crude, but for rhythmic, predictable movements, it's enough to make the system look like a working control loop, not a magic trick.
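
If you'd rather verify a number like 142 ms in your own setup than take it on faith, a simple perf_counter wrapper around each stage is enough. The function below is a generic timing illustration, not part of the original pipeline.

```python
import time

def timed_stage(label: str, fn, *args, **kwargs):
    """Run one pipeline stage and report its wall-clock time in milliseconds."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.1f} ms")
    return result

# e.g. command = timed_stage("claude_inference", frame_to_motion_command, jpeg_bytes)
```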

The limitations are also stated honestly: muscle fatigue sets in after 20-28 minutes, safety relies solely on current limiting, and there is no proper sensory feedback. And yes, Anthropic doesn't position this for medical applications, so I would immediately separate this research prototype from a product.
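
Those limits translate naturally into software guards. The sketch below caps intensity and session length; the names SafetyGate, MAX_INTENSITY, and MAX_SESSION_SECONDS are hypothetical, and in a real system software checks like these sit on top of hardware current limiting, never in place of it.

```python
import time
from dataclasses import dataclass

MAX_INTENSITY = 0.6            # illustrative cap, as a fraction of allowed stimulator output
MAX_SESSION_SECONDS = 20 * 60  # stop before the reported fatigue window

@dataclass
class SafetyGate:
    session_start: float

    def clamp(self, command: dict) -> dict | None:
        """Return a bounded command, or None to force an emergency stop."""
        if time.monotonic() - self.session_start > MAX_SESSION_SECONDS:
            return None  # session ran too long: cut stimulation entirely
        command["intensity"] = min(command.get("intensity", 0.0), MAX_INTENSITY)
        return command
```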

What This Changes for Business and Automation

I don't see a market here for "AI plays VR for you," but rather a more useful vector: artificial intelligence integration is getting closer to physical operations. It's not just about analyzing video but about immediately triggering an action: exoskeletons, rehabilitation, industrial manipulators, or motor pattern training.

The winners are teams that can build the full loop: vision, model, a safe controller, telemetry, and an emergency shutdown (see the sketch below). The losers are those who think it's enough to slap an LLM onto a piece of hardware and call it a product.
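
As a rough sketch of what that full loop looks like in code: every object here (camera, model, controller, estop) is a hypothetical placeholder rather than anything from the project, but the shape of the loop (read, infer, clamp, actuate, log, bail out on the e-stop) is the part that separates a product from a demo.

```python
import logging
import time

log = logging.getLogger("ems_loop")

def run_control_loop(camera, model, controller, estop) -> None:
    """Vision -> model -> safe controller, with telemetry and an emergency stop."""
    gate = SafetyGate(session_start=time.monotonic())
    try:
        while not estop.is_pressed():
            frame = camera.read()
            command = model.infer(frame)   # e.g. the frame_to_motion_command step above
            safe_cmd = gate.clamp(command)
            if safe_cmd is None:
                log.warning("safety gate tripped; halting stimulation")
                break
            controller.apply(safe_cmd)     # e.g. a serial write to the Arduino
            log.info("telemetry: %s", safe_cmd)
    finally:
        controller.stop_all()              # always de-energize on exit
```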

In client projects, I constantly run into the same reality: the hardest part isn't the model but the reliable AI architecture between the software and the physical world. At Nahornyi AI Lab, we solve exactly these bottlenecks when a client needs AI solution development tailored to their process, risks, and real-world constraints, not just a demo.

If you have a task brewing that requires linking computer vision, signals, and action in a single loop, let's look at it without the magic. Sometimes, careful AI integration is enough to remove manual operations, speed up the cycle, and stop making people do what a machine can already do better.

AI advances like this one, reaching into the physical world, always raise questions about practical applicability. We've previously discussed how the lack of a thoughtful architecture can leave impressive embodied AI demos as little more than myth, with no real-world implementation.
