
AI tools that detect dishonesty in video interviews

Worried your next remote candidate is a deepfake or just a polished storyteller? We break down the newest AI tools that spot identity spoofing and real-time fibbing before a bad hire slips through.
Published on: May 13, 2025
Updated on: May 13, 2025

We moved first-round interviews online to save time and widen the talent pool. Fraudsters moved with us. Reports of job-related scams almost tripled between 2020 and 2024, with losses soaring from $90 million to $500 million, much of it tied to AI-generated personas in video interviews.

Dishonesty now appears in three main guises:

  1. Identity spoofing: A deepfake or stand-in answers for the real applicant.
  2. AI-enhanced content: Large language models generate or polish real-time answers.
  3. Classic lying: Embellishing experience or hiding disqualifying facts.

Traditional interview techniques can’t keep pace with these threats, but several AI products have emerged to plug the gap. Below are the five we see gaining momentum in 2025, plus the legal guard-rails you’ll need.

1. EyeDetect by Converus

EyeDetect tracks subtle changes in pupil dilation and fixation while candidates answer onscreen questions. Independent studies place its accuracy at 86–88 percent for pre-employment screening; a test takes about 30 minutes and attaches no sensors to the candidate.
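
Converus does not publish EyeDetect's scoring model, but the underlying principle is well established: pupil diameter rises with cognitive load, and deceptive answers tend to demand more of it. The sketch below is a minimal, hypothetical illustration of that idea, comparing each question window against the candidate's own baseline; it is not the product's algorithm.

```python
import numpy as np

# Hypothetical sketch -- not Converus's algorithm. Pupil diameter rises
# with cognitive load, so we compare each question window against the
# candidate's own baseline and flag statistical outliers.

def flag_high_load_questions(pupil_mm: np.ndarray,
                             windows: list[tuple[int, int]],
                             z_threshold: float = 2.0) -> list[int]:
    """Return indices of question windows with unusually high pupil dilation.

    pupil_mm -- pupil-diameter samples, one per eye-tracker frame
    windows  -- (start, end) frame ranges, one per onscreen question
    """
    means = np.array([pupil_mm[s:e].mean() for s, e in windows])
    z = (means - means.mean()) / means.std()   # per-candidate baseline
    return [i for i, score in enumerate(z) if score > z_threshold]

# Example: 10 questions; question 7 carries a simulated dilation spike.
rng = np.random.default_rng(0)
samples = rng.normal(3.5, 0.1, 3000)           # ~3.5 mm resting pupil
samples[2100:2400] += 0.6                      # load spike during question 7
question_windows = [(i * 300, (i + 1) * 300) for i in range(10)]
print(flag_high_load_questions(samples, question_windows))   # -> [7]
```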

Why it matters: Fast, non-intrusive and already used by U.S. federal agencies. Converus claims an 88 percent hit rate for integrity testing, outscoring conventional polygraphs.

Watch-outs: Requires a controlled lighting set-up and high-resolution webcam; results may be inadmissible in some jurisdictions.

2. Layered Voice Analysis (LVA)

Israel-based Nemesysco’s LVA algorithms parse vocal micro-tremors that correlate with cognitive load and stress. The LVA 6.50 suite flags emotional spikes in any audio stream, while LVA-i HR packages the tech for remote hiring workflows.
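
Nemesysco's algorithms are proprietary, so the sketch below only illustrates the category of signal such tools work from: frame-to-frame micro-variation in the speech waveform, which tends to rise under stress and cognitive load. Function names and thresholds here are hypothetical.

```python
import numpy as np

# Hypothetical sketch -- not Nemesysco's proprietary LVA. It illustrates
# the category of signal such tools work from: frame-to-frame micro-
# variation in the speech waveform, which tends to rise under stress.

FRAME = 480  # 30 ms frames at 16 kHz

def vocal_instability(audio: np.ndarray) -> float:
    """Coefficient of variation of short-time energy across voiced frames."""
    n = len(audio) // FRAME
    frames = audio[: n * FRAME].reshape(n, FRAME)
    energy = (frames ** 2).mean(axis=1)
    e = energy[energy > 0.1 * energy.max()]    # crude silence gate
    return float(e.std() / e.mean())

def risk_zones(answers: list[np.ndarray], ratio: float = 1.5) -> list[int]:
    """Flag answers whose instability exceeds `ratio` times the candidate's median."""
    scores = [vocal_instability(a) for a in answers]
    baseline = float(np.median(scores))
    return [i for i, s in enumerate(scores) if s > ratio * baseline]
```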

Why it matters: Works over an ordinary video-platform recording; no extra hardware. The dashboard highlights “risk zones” in real time so interviewers can probe further.

Watch-outs: Voice-stress evidence is controversial and, in some states, classified as a “lie detector” subject to restrictive labour laws (see compliance notes below).

3. HireVue Integrity Features

HireVue’s platform no longer reads facial expressions, but its integrity toolkit still spots deception in answers:

  • Similarity scoring for copied code or text (see the sketch after this list)
  • Browser-tab change alerts and screen-capture trails
  • Multi-stage validation that compares live answers to earlier recordings
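
HireVue does not publish its similarity scorer, but the core idea is straightforward to sketch: score every pair of answers to the same question and flag pairs that are suspiciously close. A minimal, hypothetical version using only Python's standard library:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical sketch -- HireVue does not publish its scorer. Score every
# pair of candidate answers to the same question; flag suspiciously close pairs.

def flag_shared_responses(answers: dict[str, str],
                          threshold: float = 0.85) -> list[tuple[str, str, float]]:
    flags = []
    for (a, text_a), (b, text_b) in combinations(answers.items(), 2):
        score = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if score >= threshold:
            flags.append((a, b, round(score, 2)))
    return flags

answers = {
    "cand_017": "I would normalise the table, then add a covering index.",
    "cand_042": "I would normalise the table and then add a covering index.",
    "cand_108": "Denormalise for reads and cache the hot rows in Redis.",
}
print(flag_shared_responses(answers))   # -> [('cand_017', 'cand_042', 0.96)]
```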

HireVue reports that fewer than 1 percent of candidates are flagged for shared responses, yet those individuals score markedly lower overall – a built-in disincentive to cheat.

Why it matters: Many enterprises already license HireVue; activating the anti-cheat layer adds minimal friction.

Watch-outs: HireVue stresses it evaluates language only, not biometrics, a posture adopted after it halted facial-analysis scoring in 2021 under external pressure.

4. Reality Defender

Reality Defender runs as a plug-in for Zoom (beta) and raises a prompt when it detects synthetic artefacts in the video stream – from subtle compression anomalies to mismatched eye blinks. In demos, the tool successfully flagged a live deepfake of Elon Musk within seconds.
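
Reality Defender's detectors are proprietary, but one of the artefacts mentioned above, mismatched eye blinks, is easy to illustrate. Early deepfake research found that synthetic faces blink abnormally; given a per-frame eye-openness series from any face-landmark library, a crude check looks like this (a hypothetical sketch, not the product's method):

```python
import numpy as np

# Hypothetical sketch -- not Reality Defender's method. Early deepfake
# research found synthetic faces blink abnormally (humans blink roughly
# 10-20 times a minute). Given a per-frame eye-openness series from any
# face-landmark library, a crude check counts dips below a threshold.

def blinks_per_minute(eye_openness: np.ndarray, fps: float,
                      closed_below: float = 0.2) -> float:
    closed = eye_openness < closed_below
    onsets = np.flatnonzero(closed[1:] & ~closed[:-1])   # open -> closed
    minutes = len(eye_openness) / fps / 60.0
    return len(onsets) / minutes

def blink_rate_suspicious(eye_openness: np.ndarray, fps: float = 30.0) -> bool:
    rate = blinks_per_minute(eye_openness, fps)
    return rate < 4.0 or rate > 40.0   # outside a generous human range
```

Blink rate alone is a weak cue (current deepfakes do blink), which is why commercial tools fuse many artefact detectors rather than relying on any single tell.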

Why it matters: Identity fraud is the fastest-growing vector in remote hiring. Automatic deepfake checks let us verify that the person answering is the person we screened.

Watch-outs: Heavy GPU use; organisations may need to route calls through Reality Defender’s cloud for full functionality.

5. GetReal Labs

Co-founded by deepfake scholar Hany Farid, GetReal Labs analyses facial landmarks, lighting inconsistencies and metadata across every frame. The platform is built for live calls rather than post-call audits, giving recruiters instant confidence (or warnings) before offers go out.
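
GetReal's pipeline is unpublished, but the "live calls rather than post-call audits" distinction is worth making concrete: per-frame detector scores are noisy, so a live tool needs smoothing plus hysteresis to warn quickly without flickering alerts. A hypothetical sketch, where frame_score stands in for whatever synthetic-likelihood a detector emits:

```python
# Hypothetical sketch of why live detection differs from a post-call audit:
# per-frame scores are noisy, so a live monitor needs smoothing plus
# hysteresis to warn quickly without flickering alerts. `frame_score`
# stands in for whatever synthetic-likelihood (0..1) a detector emits.

class LiveDeepfakeMonitor:
    def __init__(self, alpha: float = 0.1,
                 warn_at: float = 0.7, clear_at: float = 0.4):
        self.ema = 0.0                       # smoothed synthetic-likelihood
        self.alpha = alpha
        self.warn_at, self.clear_at = warn_at, clear_at
        self.alerting = False

    def update(self, frame_score: float) -> bool:
        """Feed one frame's score; returns True while an alert is active."""
        self.ema = self.alpha * frame_score + (1 - self.alpha) * self.ema
        if not self.alerting and self.ema >= self.warn_at:
            self.alerting = True             # raise the warning mid-call
        elif self.alerting and self.ema <= self.clear_at:
            self.alerting = False            # hysteresis prevents flicker
        return self.alerting
```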

Why it matters: Prevents onboarding costs wasted on fraudulent hires; also helps protect against bonus-hunting “job-phishers” who disappear after the first paycheck.

Watch-outs: Still in early-access; pricing is skewed toward large enterprises.

Compliance, ethics and the lawsuit nobody wants

Last year a class action accused CVS of violating Massachusetts’ ban on lie detectors by using AI that boasted of “scaleable lie detection.” Similar bans exist in Maryland and New York, and the U.S. Employee Polygraph Protection Act could be interpreted to cover voice-stress or eye-tracking tech.

Key checkpoints before you deploy:

  • Obtain explicit consent and explain what signals you collect (a consent-record sketch follows this list).
  • Run validity studies to show job-relatedness and mitigate bias.
  • Offer accommodations for candidates with disabilities or low-bandwidth connections.
  • Keep a human in the loop for any adverse decision.
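
To make the first checkpoint auditable, it helps to log consent as structured data: what was disclosed, what was agreed to, and when. A minimal, hypothetical record shape (not a legal template; review with counsel for your jurisdiction):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record shape, not a legal template -- review with counsel.
# The point: consent is only auditable if you log what was disclosed,
# what was agreed to, and when.

@dataclass(frozen=True)
class InterviewConsent:
    candidate_id: str
    signals_disclosed: tuple[str, ...]   # e.g. ("eye_tracking", "voice_stress")
    consent_given: bool
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

consent = InterviewConsent(
    candidate_id="cand_017",
    signals_disclosed=("video_identity_check", "response_similarity"),
    consent_given=True,
)
assert consent.consent_given, "do not start analysis without explicit consent"
```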

Implementation playbook

  1. Define dishonesty upfront. Is your priority identity verification, content originality or psychological truthfulness? Different goals call for different tools.
  2. Layer signals. Combine an identity-focused product (Reality Defender or GetReal) with a behaviour-focused one (EyeDetect or LVA) for higher confidence; see the sketch after this list.
  3. Pilot, don’t plunge. Start with bottleneck roles where mis-hires are expensive; collect outcome data on turnover, performance and candidate feedback.
  4. Educate recruiters. AI flags are hypotheses, not verdicts. Train hiring managers on follow-up questioning that gives candidates a fair chance to clarify.
  5. Review every six months. Deepfake and LLM capabilities evolve weekly; your defences should, too.
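
To make step 2 concrete while honouring step 4, here is a hypothetical routing sketch in which no combination of flags ever auto-rejects a candidate; flags only change who looks at the interview next:

```python
from dataclasses import dataclass

# Hypothetical sketch of step 2, honouring step 4: detector outputs are
# hypotheses, so no combination of flags ever auto-rejects a candidate --
# flags only change who looks at the interview next.

@dataclass
class Signals:
    identity_risk: float    # from a deepfake/identity tool, 0..1
    behaviour_risk: float   # from an eye-tracking or voice tool, 0..1

def route(s: Signals, threshold: float = 0.7) -> str:
    flags = (s.identity_risk >= threshold) + (s.behaviour_risk >= threshold)
    if flags == 0:
        return "proceed"                  # both detectors quiet
    if flags == 1:
        return "human_review"             # a single flag is a question to ask
    return "human_review_priority"        # both flagged: verify identity first

print(route(Signals(identity_risk=0.2, behaviour_risk=0.9)))   # human_review
```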

The bottom line

As generative AI makes it cheaper to fake identity, credentials and even real-time answers, the cost of not validating honesty rises. Early adopters who blend specialised AI detectors with transparent human oversight will cut mis-hire rates, avoid regulatory landmines and – crucially – preserve trust in an increasingly virtual hiring funnel. The technology is imperfect, but the alternative is making multi-year salary bets in the dark.