Technical Context
I view this discussion not as "candidate cheating" but as a new attack surface on your assessment process. In chat, colleagues are discussing how to detect whether a candidate is reading answers off a screen, sharing links to textream.fka.dev, a tool dubbed a "prompter for hints." In parallel, a working consensus is forming: move away from general knowledge questions and instead "roast" the experience on the resume: specific projects, specific decisions, specific trade-offs.
What catches my eye as an architect: technically, LLM prompters today require almost no integration. A second screen, an overlay, or a separate window is enough. Many tools of this class don't need to be widely known or indexed; they can be niche, self-hosted, and quick to change domains. Trying to "ban Textream" specifically is therefore the wrong target. The right target is to ensure that a prompter cannot carry a candidate past your filter without real competence.
The discussion mentions the observation that "you can see their eyes tracking lines." Yes, behavioral markers sometimes reveal prompter use, but I don't build a defense on them. First, they scale poorly and are subjective. Second, candidates adapt quickly: camera higher, text closer, natural pauses. Third, you risk wrongly penalizing a strong engineer who simply has communication quirks.
Technically, you have three planes of control:
- Question Content: How tied are they to personal experience and context that an LLM lacks?
- Interaction Format: Live collaborative solving, debugging, working with artifacts, not a monologue.
- Tooling Perimeter: Minimal proctoring and logging, applied only where the risk justifies it and the legal basis is sound.
Offline interviews, as colleagues noted, are indeed more reliable. But in real projects, I see that fully offline is a luxury: distributed teams, hiring speed, geography. This means we need to design "remote-resilient" interviews, not dream of returning to meeting rooms.
Business & Automation Impact
If you are hiring engineers, analysts, product managers, or even process operators, the LLM prompter turns the classic interview into a bad dataset. You think you've selected the strong ones, but you've actually bought beautiful speech and template answers. The cost of error is not just salary. It's missed deadlines, team toxicity, inflated tech debt, and a repeat hiring cycle.
Who wins in the new reality? Companies that know how to evaluate the process, not the "correctness of the answer." Losers are those whose interview is a list of questions from the internet and terminology checks. I see this constantly: as soon as a question has an "ideal paragraph" from an LLM, it stops distinguishing levels.
I rebuild interviews around candidate artifacts and your reality:
- I take a project from the resume and ask the candidate to reconstruct the context: why this database was chosen, why this specific pipeline, what broke in production.
- I ask trade-off questions: what would you simplify if you had to cut infrastructure costs by 30%?
- I give a short debugging task (an incident log, a piece of code, a queue schema) and watch how the person thinks aloud and where they would add instrumentation.
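To make the debugging format concrete, here is a sketch of the kind of small, deliberately broken artifact that works well. The function and its bug are hypothetical, not taken from a real task bank; the candidate is expected to spot the silent data loss and say where they would log or assert.

```python
def batch(items: list, size: int) -> list:
    """Split `items` into chunks of `size` for a downstream queue.

    Deliberately planted bug for the interview: the final partial
    chunk is silently discarded, so records vanish with no error.
    """
    chunks = []
    buf = []
    for item in items:
        buf.append(item)
        if len(buf) == size:
            chunks.append(buf)
            buf = []
    return chunks  # bug: a non-empty leftover `buf` is dropped here

# Five records go in, only four come out. A strong candidate notices
# the mismatch and asks for a record count on both sides of the step.
print(batch([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4]] -- the 5 is lost
```

The point is not the bug itself but the candidate's process: whether they reason from the symptom ("output is shorter than input") back to the code path, and what instrumentation they propose before touching the fix.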
Paradoxically, AI automation on the employer's side helps here too. In my practice at Nahornyi AI Lab, we implement not "AI to replace the interviewer," but automation around the process: structured assessment rubrics, auto-summaries from recordings, extraction of key points and contradictions, coverage control. This reduces noise and makes decisions consistent across different interviewers.
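A minimal sketch of what "structured rubric plus coverage control" can look like in code. The criteria, weights, and 1-5 scale here are hypothetical placeholders, not the rubric we actually ship:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    name: str
    weight: float  # relative importance; weights sum to 1.0

# Hypothetical rubric; the real dimensions depend on the role.
RUBRIC = (
    Criterion("problem_diagnosis", 0.35),
    Criterion("trade_off_reasoning", 0.35),
    Criterion("communication", 0.30),
)

def weighted_score(ratings: dict) -> float:
    """Aggregate 1-5 interviewer ratings into one weighted score.

    Requiring a rating for every criterion doubles as coverage
    control: an interview that skipped a dimension fails loudly
    instead of producing a silently incomplete assessment.
    """
    missing = [c.name for c in RUBRIC if c.name not in ratings]
    if missing:
        raise ValueError(f"criteria not covered in the interview: {missing}")
    return sum(c.weight * ratings[c.name] for c in RUBRIC)
```

With fixed weights, two interviewers who agree on the per-criterion ratings necessarily agree on the final score, which is exactly the cross-interviewer consistency the rubric exists to provide.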
However, there is a nuance: if you start using AI for candidate scoring, you must keep the architecture transparent. Who made the decision, and on what basis? What data was used? Where are recordings stored? In HR, the hard part of an AI solution's architecture is never the model; it is access control, auditing, and the legal basis for processing personal data.
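To make the transparency requirement concrete, a minimal sketch of the audit entry I mean. The field names are illustrative, not a standard schema; the key property is that every scoring decision answers "who, on what basis, with which data" and is written once, then never edited:

```python
import json
from datetime import datetime, timezone

def audit_entry(candidate_id: str, decision: str, decided_by: str,
                evidence: list, data_sources: list) -> str:
    """Build one append-only audit line for a hiring decision.

    `decided_by` is a human: even with AI-assisted scoring, a person
    stays accountable. `evidence` points at rubric scores and
    auto-summaries; `data_sources` records what was processed and
    where the recordings live (for retention and legal review).
    """
    record = {
        "candidate_id": candidate_id,
        "decision": decision,          # e.g. "advance" / "reject"
        "decided_by": decided_by,
        "evidence": evidence,
        "data_sources": data_sources,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)

# Appending lines to a log, rather than updating rows in place,
# keeps the trail tamper-evident and easy to export for an audit.
```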
Strategic Vision & Deep Dive
I expect "prompters" to become standard the way IDE autocomplete once did. Then the question will sound different: do we hire people who can work effectively with LLMs, or people who can work without them? My answer is both, but for different roles and with different control interfaces.
In Nahornyi AI Lab projects, I already see a pattern: companies seriously doing AI implementation in processes (support, sales, analytics, production) start requiring "tool skills" from employees. But they often fail at the basic thing — the human ability to diagnose a problem, formulate hypotheses, and verify the result. A prompter helps formulate text but doesn't create engineering intuition.
Therefore, my non-trivial advice: don't fight prompters with bans; fight them with task design. I embed "plot twists" in interviews where the template answer falls apart:
- I change a constraint mid-task: "now imagine GDPR/PII applies and logging is forbidden." A strong candidate adapts the solution; a weak one gets stuck.
- I ask the candidate to name two alternatives and the selection criteria. An LLM will produce a list, but without an internal ranking and without ties to your budget and risk profile.
- I ask about the "biggest technical challenge" and drill down into details: metrics, timeline, what was rolled back, what was measured after the fix.
The trap I often see: a company complicates proctoring, makes the interview unpleasant, and ends up losing strong candidates faster than it catches cheaters. Utility here is more important than hype: soft control + smart interview scenarios give the best ratio of accuracy to conversion.
It will only get tougher from here: remote processes will become the norm, and prompting tools will become harder to detect. The winners will be those who turn hiring into an engineering system: measurable, repeatable, with a feedback loop from the on-the-job performance of the people they hired.
If you want to rebuild hiring for the LLM era, from interview structure to data governance and assessment automation, I invite you to discuss the task with Nahornyi AI Lab. Write to me, Vadym Nahornyi, and I will propose a practical scheme that protects hiring quality without turning the process into an interrogation.