Hiring teams are using one-way video interviews to move faster without sacrificing quality. If you are new to the format, this guide shows exactly how to evaluate responses with fairness, consistency, and speed.
What a one-way video interview is and why it matters
A one-way video interview asks candidates to record answers to a fixed set of questions on their own time. There is no live interviewer. Your team reviews the recordings later, each reviewer on their own schedule.
This format is also called on-demand or asynchronous interviewing. It removes scheduling overhead, gives every candidate the same prompt, and makes early screening repeatable.
Benefits and drawbacks for busy teams
Why teams adopt it
- Time saved since you review on your schedule, not the candidate’s
- Consistent prompts so every candidate gets a fair shot at the same questions
- Parallel review since multiple reviewers can score independently
- Candidate flexibility, which typically boosts completion rates
Tradeoffs to manage
- Less rapport than a live conversation
- Tech hiccups that can affect audio or video quality
- Harder to ask follow-up questions, so you must design questions carefully
A step-by-step framework that scales
Use this repeatable flow to evaluate one-way interviews across roles and hiring cycles.
1. Define role competencies before you press play
List 5 to 7 competencies tied to outcomes for the role. Think “handles 30 inbound tickets per day with 95 percent CSAT,” not “good communicator.” Anchor each competency to observable behaviors.
2. Standardize questions to those competencies
Map every question to at least one competency. Remove any question that does not map. Keep 3 to 5 questions for most roles, 6 to 8 for internships and early-career programs.
3. Configure the interview for signal
Set time per answer, minimum answer length, and retake limits. Fewer retakes produce a more realistic signal. Offer an audio-only option for roles where on-camera presence is not essential.
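One way to keep these settings consistent across a hiring round is to write them down once and reuse them for every candidate. The sketch below is a hypothetical example; the field names and values are illustrative placeholders, not any specific platform's settings.

```python
# Hypothetical interview settings for a single role. Field names and values
# are illustrative placeholders; map them to whatever your platform exposes.
SUPPORT_ASSOCIATE_INTERVIEW = {
    "question_count": 4,           # 3 to 5 questions for most roles
    "max_answer_seconds": 180,     # one to three minutes per answer
    "min_answer_seconds": 30,      # blocks five-second throwaway responses
    "retakes_per_question": 1,     # fewer retakes, more realistic signal
    "audio_only_allowed": True,    # where on-camera presence adds little
}
```

Using the same settings for every candidate in a round is what keeps the comparison fair.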
4. Score independently with a rubric
Each reviewer scores on a shared scorecard without discussing first. Use a 1 to 4 or 1 to 5 scale with behavioral anchors for each level. Avoid free-form “vibes” notes until after numeric scoring.
5. Aggregate scores and resolve outliers
Average by competency and overall. If one reviewer’s score is two levels away from the group, discuss why and rewatch the exact timestamp before changing a score.
6. Decide, document, and move
Advance candidates who meet your pre-set thresholds. Record a one sentence rationale that cites specific evidence. Send timely outcomes to everyone.
What to include in your scorecard
Use concise criteria with behavioral examples so any trained reviewer can apply the same bar.
Communication clarity
How clearly the candidate structures answers and lands the main point
- 5: Structure stated up front, concise, easy to follow
- 3: Understandable but meanders, unclear close
- 1: Disorganized, main point missing
Job knowledge or technical fit
Evidence they can perform core tasks, not just claim them
- 5: Cites relevant tools, metrics, and past outcomes
- 3: High level familiarity without specifics
- 1: Confuses basic concepts
Professionalism and presence
Prepared setting, respectful tone, steady delivery
- 5: Professional setting, engaged, steady pace
- 3: Minor distractions, occasional filler
- 1: Distracting environment, inattentive
Values alignment
Signals that match how your team works day to day
- 5: Concrete examples that mirror your cultural markers
- 3: General statements with limited evidence
- 1: Clear misalignment with stated norms
Overall recommendation
Would you advance this person based on the evidence in this interview?
- Yes, Neutral, or No, with a one-line reason
How to combine scores and make faster decisions
- Collect all reviewer scorecards first. No discussion.
- Calculate the average and median per competency and overall. For small panels, the median reduces the impact of one generous or strict reviewer.
- Use thresholds you set in advance, for example: an average of 3.5 or higher advances, and any competency below 2 requires discussion (see the sketch after this list).
- Escalate only outliers where spread is wide, then revisit specific timestamps.
- Batch decisions twice a week so candidates hear back quickly.
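If your scorecards end up in a spreadsheet export, these rules are simple to automate. Here is a minimal sketch in Python, assuming a 1 to 5 scale and the example thresholds above; the competency names, sample scores, and the two-level spread rule are placeholders to adapt to your own rubric.

```python
from statistics import mean, median

# Independent scorecards, one dict per reviewer, on a 1 to 5 scale.
# Competency names mirror the sample scorecard; the scores are made up.
scorecards = [
    {"communication": 4, "job_knowledge": 5, "professionalism": 4, "values": 3},
    {"communication": 4, "job_knowledge": 4, "professionalism": 4, "values": 4},
    {"communication": 5, "job_knowledge": 4, "professionalism": 3, "values": 4},
]

ADVANCE_AVERAGE = 3.5   # overall average at or above this advances
DISCUSS_FLOOR = 2       # any competency below this triggers a discussion
WIDE_SPREAD = 2         # max minus min of 2+ levels means revisit the timestamps

def summarize(cards):
    summary = {}
    for comp in cards[0]:
        scores = [card[comp] for card in cards]
        summary[comp] = {
            "mean": round(mean(scores), 2),
            "median": median(scores),  # sturdier than the mean for small panels
            "wide_spread": max(scores) - min(scores) >= WIDE_SPREAD,
        }
    overall = round(mean(s["mean"] for s in summary.values()), 2)
    needs_discussion = any(
        s["median"] < DISCUSS_FLOOR or s["wide_spread"] for s in summary.values()
    )
    decision = "advance" if overall >= ADVANCE_AVERAGE and not needs_discussion else "discuss"
    return summary, overall, decision

per_competency, overall, decision = summarize(scorecards)
print(overall, decision)   # prints 4.0 advance for the sample scores
```

Treat the output as triage, not a verdict: anything flagged for discussion goes back to the panel along with the timestamps noted during review.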
Bias busters that actually work
- Structured rubrics only: require evidence tied to behaviors, not background
- Multiple independent raters with different perspectives reviewing separately first
- Hidden identifiers on first pass where feasible, such as masking names on the reviewer screen
- Regular calibration every 10 to 15 candidates to compare how scores are being applied and adjust anchors
Red flags that consistently predict mis-hiring
- Background distractions that show lack of preparation
- Reading a script word for word rather than speaking naturally
- Off-prompt answers or generic claims without examples
- Unresolved tech issues after clear instructions
- Missed deadlines for submission without a strong reason
Speed tactics without losing quality
- Batch by question so you compare like with like and spot patterns fast
- Skim at higher playback speed for the first pass, then rewatch finalists at normal speed
- Timestamp standout moments in your notes so reviewers can jump straight there
- Cap reviews per reviewer per day to avoid fatigue and score drift
Truffle tips to operationalize this
- Use qualification questions to knock out candidates missing must-haves before the interview begins
- Limit retakes to keep answers authentic while reducing candidate anxiety
- Offer an audio-only option where video does not add predictive value
- Lean on AI summaries and match scores to triage which candidates deserve your time first
- Keep transcripts attached to every answer so evidence is one click away
Wrap-up and next steps
Evaluating one-way interviews is simple once your bar is explicit, your questions are mapped to outcomes, and your reviewers follow the same scoring playbook. The payoff is faster shortlists, cleaner documentation, and fewer “nice interview, wrong hire” moments.
If you want this workflow out of the box, Truffle gives you: standardized question banks with AI scoring, audio or video responses, configurable retakes and timing, qualification knockouts, match percentages, and per-question transcripts and summaries. Spin up your first interview in minutes and start advancing only the right candidates.
Try Truffle free for 7 days and see how much time you get back.
Frequently asked questions
Can candidates re-record answers?
Yes. You choose the retake limit per question. Many teams allow one retake to balance fairness with authenticity.
How long should each answer be?
One to three minutes per question is the sweet spot. Set a minimum answer length to avoid five-second responses.
How many questions work best?
Three to five for most roles. Up to eight for internships and graduate programs where you need more signal.
What is a fair passing score?
Set minimums per competency, not just an overall score. For example, overall 3.5 average with no competency below 3. Calibrate every 10 to 15 candidates.
What if a candidate has tech issues?
Provide a short device check and practice prompt. If issues persist and affect fairness, allow a single resubmission window.