Tags: LLM, Stock Market, AI Automation

LLMs, Elliott Waves, and News: Finding the Real Value

The core idea is simple: LLMs don't turn Elliott Waves into a precise trading algorithm, but they can dramatically enhance analysis by adding news context. This is crucial for AI implementation because the model excels at explaining market regimes rather than guessing exact entry points, making it a powerful analytical tool.

Technical Context

I love conversations like this because they quickly strip away the mask of unrealistic expectations. To be brutally honest, LLMs don't operate in a world of strict algorithms but in a realm of statistical patterns. When I design AI integration for market-related tasks, I start with this limitation in mind.

The problem with Elliott Waves is an old one: you can always label a chart beautifully in hindsight, but in real-time, it's almost always debatable. It's unclear where a wave began, which pattern is currently active, and whether the next news event will shatter everything. Therefore, the theory itself is useful as a descriptive language but weak as a standalone predictive engine.

Against this backdrop, LLMs are useful, just not in the way everyone dreams. They are quite good at gathering context, proposing several labeling hypotheses, explaining why a movement looks like an impulse or a correction, and, most importantly, linking the chart to text. This was technologically difficult 13 years ago, but now it can be assembled into a working system.

I've looked at where the research is actually heading: multi-agent schemes, RAG over analytics, a separate layer for news, and another for price action. This is a sound architecture. If you mix everything into one pot, the model starts confusing chart structure with a compelling narrative from headlines.
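To make the "separate layers" point concrete, here is a minimal sketch of how price features and news context might be kept apart all the way into the prompt, so the model cannot silently blend headline narrative with chart structure. All field names and the prompt layout are my own illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class PriceContext:
    """Deterministic chart features (hypothetical fields)."""
    trend: str           # e.g. "up", "down", "sideways"
    swing_highs: list    # recent swing-high prices
    swing_lows: list     # recent swing-low prices

@dataclass
class NewsContext:
    """Separately retrieved headlines (hypothetical fields)."""
    headlines: list
    sentiment: float     # -1.0 .. 1.0 from a separate scorer

def build_prompt(price: PriceContext, news: NewsContext) -> str:
    """Keep the two layers clearly delimited so the reasoning layer
    always sees which claims come from the chart and which from text."""
    return (
        "## Price action (deterministic features)\n"
        f"trend={price.trend}, highs={price.swing_highs}, lows={price.swing_lows}\n\n"
        "## News (retrieved separately)\n"
        f"sentiment={news.sentiment:+.2f}\n"
        + "\n".join(f"- {h}" for h in news.headlines)
    )
```

The deterministic feature extraction stays in ordinary code; only the final, clearly labeled context goes to the model.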

That's why I prefer this approach: don't ask an LLM for a precise price or a reversal point. Instead, make it generate 2-3 plausible scenarios, highlight what confirms them, and specify where each scenario breaks down. Here, the model is playing its own game, not pretending to be a magical Bloomberg terminal.

Impact on Business and Automation

For practical application, the conclusion is stark: those who build a decision-making layer, not a 'market oracle,' will win. LLMs can be implemented as a reasoning layer on top of technical indicators, news, and risk rules. This looks like useful automation with AI, not an expensive toy.

Those who expect algorithmic certainty from a statistical model will lose. If you don't distinguish between your hard rules and your probabilistic hypotheses, the system will make confident and very expensive mistakes.

I would add one more practical criterion: this kind of stack should never be deployed with real money without walk-forward tests, control for look-ahead bias, and separate validation of news vs. chart signals. At Nahornyi AI Lab, we solve these very intersections for clients: where to keep deterministic code, where to add an LLM, and how to build AI solutions for business so that they don't fall apart at the first sign of market noise.
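The walk-forward requirement is easy to state and easy to get wrong. As a minimal sketch (the function name and window parameters are mine, not a library API), every test window must start strictly after its training window ends, so no future bar can leak into fitting:

```python
def walk_forward_splits(n: int, train: int, test: int, step: int):
    """Yield (train_indices, test_indices) pairs over a series of
    length n. Each test window begins right after its training
    window, which is the basic guard against look-ahead bias."""
    start = 0
    while start + train + test <= n:
        yield (range(start, start + train),
               range(start + train, start + train + test))
        start += step
```

The same split boundaries should be applied to the news layer and the chart layer independently, so each can be validated on its own before they are combined.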

If you already have an analytical pipeline but it's drowning in news, hypotheses, and manual charting, let's break it down layer by layer. At Nahornyi AI Lab, I can help build AI automation where the model doesn't promise magic but actually eliminates routine tasks, speeds up analysis, and leaves humans with only the decisions worth fighting for.

The challenges of applying LLMs to complex, non-algorithmic tasks like market prediction underscore the need for rigorous evaluation. We have also explored methods to measure LLM-as-a-Judge reliability using IRT metrics, which can help ensure consistent quality control and reduce automation risks in production environments.
