A recruiter at a mid-size software company recently told us about a candidate who aced a video interview for a senior engineering role. Great communication. Sharp technical answers. Strong portfolio. Two weeks after onboarding, the person who showed up couldn't write a basic SQL query. A different person had interviewed on their behalf.
This isn't an edge case anymore. According to Gartner, by 2028, 1 in 4 job candidates globally will be fake. And the problem isn't waiting until 2028 to arrive. Deepfakes, AI-generated resumes, proxy interviewers, and identity fraud are already in your pipeline. If you're responsible for hiring, you need ways to identify fake candidates before they cost you time, money, or worse.
This guide covers what fake candidates actually look like, why the problem is accelerating, and nine specific detection methods you can use across live video, phone, and asynchronous interviews.
What are fake candidates?
Fake candidates are people who misrepresent their identity, qualifications, or both during the hiring process. The term covers a range of fraud types, and they're getting harder to distinguish from legitimate applications.
The most common forms: identity fraud (using someone else's name, credentials, or work history), AI-assisted impersonation (deepfake video or voice cloning during interviews), bait-and-switch schemes (one person interviews, a different person shows up for work), and AI-generated applications (resumes, cover letters, and screening answers produced entirely by language models with no real experience behind them).
The sophistication varies. Some are clumsy. Mismatched timelines, stock photo headshots, LinkedIn profiles with 12 connections. Others are polished enough to pass multiple interview rounds with experienced hiring teams. The clumsy ones waste your time. The polished ones can compromise your data, your systems, and your team's trust.
Why fake candidate fraud is increasing
Three forces are converging to make this problem worse every quarter.
Why remote hiring increases fraud risk
Remote interviews removed the one verification step that used to happen automatically: meeting someone in person. When every interaction is mediated through a screen, the opportunity to verify that the person you're talking to is the person they claim to be shrinks dramatically.
This doesn't mean remote hiring is the problem. It means remote hiring requires different verification methods than the ones most teams inherited from in-person processes. The companies getting burned are the ones that moved interviews online without updating their fraud detection approach.
Deepfakes and AI voice cloning
Cybersecurity firm Palo Alto Networks found that someone with no image manipulation experience can create a convincing deepfake in about 70 minutes. The tools are free or cheap, and they're improving fast.
Deepfakes superimpose one person's face onto another in real time during a video call. Voice cloning replicates someone's speech patterns from a few minutes of sample audio. Combined, these tools let a proxy interviewer appear and sound like the person on the resume.
The artifacts are still detectable if you know what to look for (more on that in the detection methods below). But the window where deepfakes were obviously fake is closing. Teams that rely on "we'd notice if something was off" are already behind.
Financial and security risks of hiring imposters
The consequences go beyond a bad hire. When an imposter gains access to your systems, the damage can include:
- Wasted recruiting, onboarding, and training costs (typically $15,000-$30,000 per hire before the fraud is discovered)
- Unauthorized access to sensitive company data and customer information
- Malware installation or data exfiltration once they're inside your network
- Operational disruption from rehiring and retraining when the fraud surfaces
The FBI warned in January 2025 about North Korean IT workers infiltrating U.S. companies through fake identities, generating $6.8 million in overseas revenue by being hired at over 300 American companies. This isn't a hypothetical risk. It's an active, well-funded fraud operation.
9 ways to detect fake candidates during interviews
These detection methods work across live video, phone, and asynchronous interviews. No single method catches everything. The most effective approach layers several of these together.
1. Spot AI-generated resumes and cover letters
AI-generated applications have a few consistent tells. Look for generic phrasing that could apply to any company or role, perfect formatting with zero human quirks (no typos, no inconsistent spacing, no personality), and details that don't add up across the resume, cover letter, and LinkedIn profile.
A resume where every bullet starts with a power verb and follows the exact same structure? That's a template at best, a language model at worst. Cross-reference claimed employers, dates, and titles against LinkedIn and public records. Inconsistencies between what the resume says and what you can verify externally are worth flagging.
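If you screen applications in volume, a lightweight script can surface the timeline and title mismatches worth a second look. Here's a minimal sketch in Python, assuming you've already extracted employment entries from the resume and from the candidate's public profile into simple dictionaries; the data, field names, and drift threshold are illustrative, not a prescribed format.

```python
from datetime import date

# Hypothetical, pre-extracted employment entries. In practice you'd pull these
# from your ATS's resume parse and a manual read of the public profile.
resume_jobs = [
    {"employer": "Acme Corp", "title": "Senior Engineer", "start": date(2019, 3, 1), "end": date(2023, 6, 1)},
    {"employer": "Initech",   "title": "Engineer",        "start": date(2016, 1, 1), "end": date(2019, 2, 1)},
]
profile_jobs = [
    {"employer": "Acme Corp", "title": "Engineer",        "start": date(2020, 1, 1), "end": date(2023, 6, 1)},
]

def flag_inconsistencies(resume_jobs, profile_jobs, max_drift_days=180):
    """Return human-readable notes where the two sources disagree."""
    notes = []
    profile_by_employer = {j["employer"].lower(): j for j in profile_jobs}
    for job in resume_jobs:
        match = profile_by_employer.get(job["employer"].lower())
        if match is None:
            notes.append(f"{job['employer']}: on resume but not on public profile")
            continue
        if job["title"] != match["title"]:
            notes.append(f"{job['employer']}: title differs ({job['title']} vs {match['title']})")
        if abs((job["start"] - match["start"]).days) > max_drift_days:
            notes.append(f"{job['employer']}: start dates differ by more than {max_drift_days} days")
    return notes

for note in flag_inconsistencies(resume_jobs, profile_jobs):
    print("FLAG:", note)
```

Flags like these aren't verdicts. They're prompts for the verification steps later in this guide.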
AI-generated content isn't an automatic disqualifier. Plenty of real candidates use AI to polish their materials. The red flag is when the content is entirely generated with no authentic detail underneath. Truffle's AI Check feature flags responses that show patterns of AI assistance, giving you a signal to dig deeper rather than grounds for automatic rejection.
2. Watch for visual red flags in video interviews
Deepfake technology has gotten good, but it still struggles with specific visual scenarios. Knowing what to watch for gives you an edge.
- Unnatural facial movements. Lip-sync delays where the mouth doesn't quite match the audio. Blurring or flickering around the edges of the face, especially the jawline and hairline. Expressions that seem slightly off, like a smile that engages the mouth but not the eyes.
- Lighting inconsistencies. Shadows on the face that don't match head movement. Skin tone that shifts when the person turns. Lighting on the face that doesn't match the lighting in the room.
- Camera avoidance. Reluctance to adjust the camera angle, turn their head to the side, or show their full face from different angles. Deepfakes struggle with profile views, so a candidate who resists any movement is worth noting.
One-way video interviews give you an advantage here. Because the recording is permanent, you can rewatch clips, slow down footage, and catch inconsistencies that fly by in a live call.
3. Use prescreen questions to filter suspicious applicants
Structured prescreen questions create a baseline you can verify later. The key is asking questions that require specific, experience-based answers rather than generic knowledge someone could look up or generate with AI.
"Tell us about a specific project where you had to make a tradeoff between speed and quality. What did you choose and why?" is harder to fake than "What are your strengths?" A candidate with real experience will reference specific tools, team dynamics, and outcomes. A fake candidate will give you polished generalities.
Candidate screening platforms that auto-generate role-specific questions from your position description help here. The questions are tailored to the role, consistent across every candidate, and create a structured record you can compare. When one candidate's answers sound dramatically different in specificity from everyone else's, that's a signal.
4. Listen for audio inconsistencies and voice cloning
Voice cloning has improved rapidly, but detectable artifacts remain. Listen for:
- A robotic or flat tone that lacks the natural variation of human speech (pauses, filler words, slight pitch changes)
- Audio delays that don't match the video latency (the voice arrives a beat after the lips move, or vice versa)
- Inconsistent background noise (audio that sounds like a studio recording in a room that clearly isn't one)
- Sudden quality shifts mid-conversation (the voice sounds different after a "connection drop")
Ask the candidate to spell something out loud, count backward from an unusual number, or repeat a phrase you make up on the spot. These unscripted audio moments are harder for cloning tools to handle smoothly.
5. Run real-time verification tests
These are specific, low-friction tests you can run mid-interview to verify identity.
- ID hold-up test. Ask the candidate to hold a government-issued ID next to their face on camera. Compare the photo to the person you're seeing. This takes 10 seconds and catches the most basic identity fraud.
- Environment check. Request a quick pan of their workspace. A genuine candidate in their home office will do this without hesitation. Someone running a proxy setup or using a virtual background to hide their real environment may resist or have a suspiciously generic backdrop.
- Spontaneous questions. Ask something unexpected that requires an immediate, unscripted response. "What's the weather like where you are right now?" or "What's the closest restaurant to your office?" A real person answers instantly. Someone in a different location or timezone stumbles.
These checks feel natural when framed correctly. "Before we start, I just need to do a quick ID verification. It's standard for all candidates." Most legitimate candidates won't think twice.
6. Request role-specific work samples
Work samples with context reveal genuine experience in ways that resumes and interview answers can't. Asking a candidate to walk through a project they worked on and explain why they made specific decisions, what they'd do differently, and what constraints they faced is far more revealing than reviewing a submitted portfolio alone.
The walkthrough matters more than the sample itself. A fake candidate might submit someone else's work. But explaining the reasoning behind the decisions, fielding follow-up questions about alternatives they considered, and discussing what went wrong requires lived experience that's hard to fabricate.
For technical roles, a short live task (20-30 minutes of pair programming, a design critique, or a data analysis exercise) adds another verification layer.
7. Check for technical environment anomalies
Pay attention to the candidate's technical setup during the interview. Several red flags can indicate deception.
- Virtual backgrounds hiding surroundings. Not inherently suspicious (plenty of legitimate candidates use them), but combined with other signals, they may indicate a proxy setup. If you ask them to remove the virtual background and they can't or won't, note it.
- Multiple audio sources. If you hear faint typing that doesn't match the candidate's hand movements, whispered voices in the background, or audio that occasionally sounds like it's coming from a different microphone, someone else may be feeding answers.
- Screen share reluctance. For roles that involve technical work, asking a candidate to share their screen during a task is a reasonable request. Unwillingness to do so, especially combined with excuses about "company policy" on a personal device, warrants scrutiny.
8. Verify identity and location before offers
Before extending an offer, take verification steps that go beyond the interview itself. Cross-reference their stated timezone with the location on their resume (if they say they're in Austin but their calendar availability suggests UTC+5, that's a problem). Verify that the email, phone number, and mailing address on the application are consistent with public records. Request a brief video call from their stated location if you have concerns.
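If you want to make that timezone comparison concrete instead of eyeballing it, a small check like the one below works. It's a sketch (Python 3.9+ for zoneinfo), assuming you've noted the candidate's stated city and the interview slots they proposed, converted to UTC; the city mapping and slot data are illustrative.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Illustrative mapping; in practice, look up the stated city properly.
stated_city_tz = ZoneInfo("America/Chicago")   # candidate says they're in Austin

# Interview slots the candidate proposed, as they arrived in your calendar (UTC).
proposed_slots_utc = [
    datetime(2025, 3, 4, 9, 0, tzinfo=timezone.utc),
    datetime(2025, 3, 5, 16, 0, tzinfo=timezone.utc),
]

for slot in proposed_slots_utc:
    local = slot.astimezone(stated_city_tz)
    # Slots that land in the middle of the night in the stated location
    # are worth a question, not an accusation.
    if local.hour < 7 or local.hour > 22:
        print(f"FLAG: proposed slot is {local:%H:%M} local time in the stated city")
```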
For remote roles, this step is especially important because you may never meet the candidate face to face. One verification call from their claimed location, with their ID visible, eliminates the most common bait-and-switch scenarios.
9. Conduct background checks at offer stage
Background checks verify what the interview process can't: actual employment history, credential validity, and identity confirmation through official records.
Run checks at the offer stage rather than earlier. This keeps the process lean for most candidates and reserves the heavier verification for finalists you're serious about. Verify employment dates and titles with previous employers. Confirm educational credentials for roles where they matter. Check for identity mismatches between the application and official records.
Background check services like Checkr, GoodHire, and TurboCheck each handle different aspects of verification. TurboCheck specifically focuses on detecting fake identities using email and phone data. Pick the level of verification that matches your risk tolerance and the sensitivity of the role.
Common red flags that reveal fake candidates
A quick-reference list for your hiring team.
Immediate disqualifiers
Refusal to show ID or turn on their camera after being asked. A different person appearing in a follow-up interview than the one in the first round. Inability to answer basic questions about their own resume (dates of employment, manager names, specific projects listed).
Subtle warning signs
Overly rehearsed answers with no natural pauses, filler words, or self-corrections. Vague responses when asked to elaborate on specific projects ("I worked on a data migration" but can't name the source system, the target, or the team size). Eye movement that suggests reading from a script off-screen, especially during technical questions.
Geographic and timezone inconsistencies
A stated location that doesn't match the candidate's IP address or timezone behavior. Confusion about their own timezone during scheduling ("Wait, what time is it there?"). Background details visible on camera that are inconsistent with their claimed location (architecture, weather through a window, signage in a language that doesn't match).
Best resume screening software to catch fraud
Several tool categories help detect fake candidates at different stages of the pipeline.
AI-powered screening tools
AI screening tools analyze applications for inconsistencies, AI-generated content, and qualification mismatches. Async video interview platforms like Truffle include built-in AI analysis that flags suspicious patterns in candidate responses. Truffle is a candidate screening platform that combines one-way video interviews with resume screening and talent assessments. The AI Check feature surfaces when responses show signs of AI assistance, giving you a transparency signal rather than an opaque score.
Identity verification platforms
Dedicated identity verification tools cross-reference applicant data (email, phone number, online presence) against public records to confirm identity. TurboCheck focuses specifically on recruiting fraud. These tools work best at the pre-interview or pre-offer stage, adding a verification layer without disrupting the candidate experience.
ATS integrations for fraud detection
Fraud detection tools can plug into your existing Applicant Tracking System through native integrations, Zapier, or API connections. This means fraud signals appear in your existing workflow rather than requiring a separate dashboard.
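What that integration looks like depends on your ATS, but the shape is usually the same: the screening tool fires a webhook when it flags a candidate, and a small handler writes a note or tag back to the candidate's record. Below is a minimal sketch of that handler; the endpoint paths, payload fields, and note format are hypothetical, not any specific vendor's API.

```python
import os
import requests
from flask import Flask, request

app = Flask(__name__)

ATS_BASE_URL = "https://ats.example.com/api"   # hypothetical ATS endpoint
ATS_TOKEN = os.environ["ATS_API_TOKEN"]

@app.route("/fraud-signal", methods=["POST"])
def fraud_signal():
    # Hypothetical webhook payload from a screening tool:
    # {"candidate_id": "...", "signal": "ai_assisted_answers", "confidence": 0.82}
    event = request.get_json()
    note = f"Screening flag: {event['signal']} (confidence {event['confidence']:.0%})"

    # Write the signal back to the candidate record so recruiters see it
    # in the workflow they already use.
    requests.post(
        f"{ATS_BASE_URL}/candidates/{event['candidate_id']}/notes",
        json={"body": note, "visibility": "internal"},
        headers={"Authorization": f"Bearer {ATS_TOKEN}"},
        timeout=10,
    )
    return {"status": "recorded"}, 200
```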
How to build a fraud-resistant interview process
Detection works better as a system than as a collection of one-off checks. Here's how to structure it across your hiring workflow.
Pre-interview verification steps
Verify the email domain matches the candidate's stated current employer (for experienced hires). Cross-reference their LinkedIn profile with resume details for consistency in titles, dates, and employers. Send a calendar invite with video requirements clearly stated so candidates know camera-on is expected.
These steps take minutes per candidate and filter the most obvious fraud before you invest interview time.
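The email-domain check in particular is easy to script. Here's a minimal sketch, assuming you keep a small mapping from employer names to their known domains; the mapping, helper name, and messages are illustrative.

```python
# Illustrative employer-to-domain mapping; in practice, build this from the
# employer's public website rather than hardcoding it.
KNOWN_DOMAINS = {
    "acme corp": {"acme.com", "acmecorp.com"},
}

FREE_MAIL = {"gmail.com", "outlook.com", "yahoo.com", "proton.me"}

def email_domain_signal(email: str, stated_employer: str) -> str:
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL:
        return "neutral: personal email, can't corroborate employer"
    expected = KNOWN_DOMAINS.get(stated_employer.lower(), set())
    if domain in expected:
        return "consistent: email domain matches stated employer"
    return "flag: corporate-looking domain that doesn't match the stated employer"

print(email_domain_signal("j.smith@acmecorp.com", "Acme Corp"))
```

A personal email address is normal and isn't a flag on its own. The useful signal is a corporate-looking domain that doesn't belong to the claimed employer.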
During-interview detection protocols
Standardized interview protocols make fraud harder to execute and easier to detect. When every candidate answers the same questions in the same format, anomalies stand out. The candidate who gives polished answers to standard questions but fumbles on spontaneous follow-ups is worth a closer look.
Structured interviews also create a comparison baseline. If nine candidates give specific, detailed answers about their project experience and one gives only generalities, that contrast is data. Video interview platforms that record responses make this comparison possible even when interviews happen days apart.
Post-interview validation checks
Before advancing a finalist, run validation. Reference checks with specific questions about projects the candidate mentioned in the interview ("They said they led the CRM migration at your company. Can you tell me about their role?") catch candidates who claimed someone else's work. Credential verification for required certifications or licenses closes another gap.
These checks don't need to be heavy. A 10-minute reference call with one targeted question is more useful than a generic "Would you hire them again?" conversation.
Legal and ethical considerations for fake candidate detection
Fraud detection is important, but it has to stay within legal and ethical boundaries.
Privacy and consent requirements
Candidates must be informed about any recording, identity verification, or background checks before they happen. This isn't just good practice. It's legally required in most jurisdictions. GDPR in Europe, various state privacy laws in the U.S. (California's CCPA, Illinois' BIPA), and Canada's PIPEDA all govern how you can collect and process candidate data.
If you're recording interviews, get consent before you start. If you're running identity verification, disclose it in your process. Transparency about your verification steps actually deters fraudulent candidates from applying in the first place.
Avoiding discrimination in fraud detection
Detection methods must focus on job-relevant signals, not protected characteristics. A candidate's accent, appearance, or location should never be used as a fraud indicator on its own. The question is whether their claimed identity, qualifications, and responses are consistent and verifiable, not whether they look or sound a certain way.
AI tools used in fraud detection should be evaluated for bias. Truffle's analysis focuses on response content and consistency against employer-defined criteria. It excludes demographic and appearance cues from its scoring, so the fraud signals are based on what was said, not who said it.
How video interview software prevents fake candidate fraud
Async video interviews create a recorded, reviewable record that's harder to fake than a phone screen and easier to analyze than a live call.
- Full transcripts. Every word a candidate says is transcribed. You can search for inconsistencies, compare answers across candidates, and revisit specific moments without relying on memory. If a candidate claims five years of Kubernetes experience but uses terminology incorrectly across multiple answers, the transcript makes that visible.
- AI analysis. Candidate screening tools that include AI analysis automatically flag response patterns that a human reviewer might miss in a single pass, like answers that closely match publicly available content or responses that lack the specificity you'd expect from someone with the claimed experience level.
- Candidate Shorts. Truffle's 30-second highlight reels surface the most revealing moments from each interview. When you're reviewing 50 candidates, watching 30 seconds per person is realistic. Watching 15 minutes per person isn't. The compressed format makes it faster to spot visual anomalies, inconsistent behavior, or answers that don't match the resume.
- Team collaboration. Multiple reviewers screening the same candidate catch red flags that one person might miss. One interviewer notices the audio lag. Another flags the vague project descriptions. A third recognizes that the "portfolio" matches a public GitHub repo. Shared review surfaces patterns that individual review buries.
FAQs about identifying fake candidates
Can recruiters rely on LinkedIn profiles to verify candidate identity?
No. LinkedIn profiles can be fabricated or embellished with minimal effort. Stock headshots, inflated titles, and copied work histories are common. Treat LinkedIn as one data point, not proof. Cross-reference the profile against the resume, application details, and what the candidate says in the interview. Inconsistencies between these sources are a stronger signal than any single source alone.
What should hiring teams do if they suspect a candidate is fake mid-interview?
Continue the interview professionally. Don't accuse or confront. Document the specific red flags you observed (audio lag, inability to answer resume questions, visual artifacts) and share them with your hiring team afterward. Run additional verification steps before making any decisions. If the red flags are serious (a different person appears, they refuse basic ID verification), it's reasonable to end the interview early with a neutral explanation like "We'll be in touch about next steps."
How do asynchronous video interviews help detect fake candidates?
Async interviews create permanent recordings that can be rewatched, slowed down, and compared across candidates. Unlike a live call where suspicious moments pass in real time, a recorded response can be reviewed by multiple team members, analyzed by AI for consistency, and preserved as evidence if needed. The structured format (same questions for every candidate) also makes anomalies easier to spot because you have a direct comparison baseline.
What is the typical cost of fake candidate detection tools?
Costs range widely. Video interview platforms like Truffle ($99/month for the self-serve plan) include AI analysis and fraud signals as part of the screening workflow. Dedicated identity verification services like TurboCheck charge per-candidate fees. Background check services typically run $30-$100 per check depending on scope. For most teams, the highest-ROI approach is building light checks into your existing interview process (prescreen questions, structured video, verification steps) before adding specialized tools. The cost of not catching a fake candidate (wasted salary, security risk, rehiring) almost always exceeds the cost of prevention.
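As a rough worked comparison using the figures above (every number here is an estimate; substitute your own):

```python
# Rough annual prevention cost, using the ranges cited in this guide.
screening_platform = 99 * 12        # $99/month self-serve screening plan
background_checks = 65 * 20         # ~$65 per check, 20 finalists a year
prevention_per_year = screening_platform + background_checks   # = 2,488

# Cost of one fake hire that slips through, low end of the range cited earlier.
one_missed_fake_hire = 15_000

print(f"Prevention: ${prevention_per_year:,}/year vs one missed fake hire: ${one_missed_fake_hire:,}")
```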