
How to assess soft skills in 2026

The classic playbook for soft skills assessment (behavioral questions and the STAR method) broke the moment ChatGPT could write a clean answer in seven seconds. Here's what's left: the methods that still hold up, a rubric that survives the handoff to a hiring manager, and prompts candidates have to perform in the moment rather than recite.
April 16, 2026

    The TL;DR

    Phone screens measure rapport, not skill. Behavioral and STAR questions are now trivially coachable. Only scenario-based performance prompts reliably surface real soft skills.
    Score every candidate against the same four-dimension rubric (clarity, concreteness, listening, warmth) so the rating is defensible to a hiring manager, not a gut read.
    Three to five recorded prompts, 90 seconds each, reviewed at 1.25x — review 20 candidates in the time it used to take to schedule three phone screens.

    A friend of mine runs a small tutoring agency. Last spring she told me she'd hired three tutors in a row who looked incredible on paper and then, two weeks in, turned out to be the kind of humans who say "um" on camera 40 times in a 30-minute session. One of them apparently started a class by telling a nine-year-old that algebra is "honestly kind of pointless." The kid's mother was, understandably, not thrilled.

    Her process at that point was simple: resume, a 20-minute Zoom call, references, offer.

    Except the Zoom calls were with her, a calm and curious person, while the actual job was with a fidgety sixth grader whose mom was listening from the kitchen.

    The skill she was hiring for was "be patient and clear with a stranger who doesn't want to be there." Her interview measured "be charming with someone who already likes you."

    That gap is the whole problem with how to assess soft skills in hiring. The signals you actually care about (patience, clarity, listening, warmth) don't show up on resumes. They barely show up in a friendly chat. And in 2026 they don't show up in a polished STAR answer either, because any candidate with five minutes and a tab open to ChatGPT can write one.

    This post is about what's left when you take those tools off the table.

    The soft skills worth assessing

    Before you pick a method, name the skills. "Good communication" is a vibe. These are the eight that actually matter for most client-facing, customer-facing, and teaching roles:

    • Clarity: Organizing a thought before saying it.
    • Listening: Responding to what was actually asked, not a rehearsed adjacent point.
    • Warmth: Sounding like a human a stranger would want to talk to on a hard day.
    • Patience: Holding tone when the other person isn't getting it.
    • Adaptability: Changing approach when the first one isn't working.
    • Accountability: Naming the part you got wrong without burying it.
    • Conflict navigation: Saying no, delivering bad news, holding a line without escalating.
    • Curiosity: Asking the next question instead of waiting for the next instruction.

    Pick three to four for any given role. More than that and your rubric loses its edge.

    Why the standard methods stopped working

    The methods most teams reach for were built for a world before AI-coached candidates and remote-first work. Here's the short tour.

    Phone screens

    They feel like they're measuring communication. They're mostly measuring whether you and the candidate have similar conversational rhythms. Warm, affable people who can't actually handle a tough customer keep making it to the offer stage because phone screens are a rapport filter, not a skill filter.

    Resume keywords

    Resume keywords are aspirational. "Strong written and verbal communication" and "team player" appear on every resume ever written. Reading about communication on a resume is like reading a transcript of a concert.

    Behavioral questions and the STAR method

    STAR used to be the gold standard. Ask a candidate to describe a time they handled X and listen for structure. The problem in 2026 is that STAR is the easiest thing in the world to coach. Candidates rehearse five clean STAR stories and rotate them. ChatGPT writes the rest. If you're worried about candidates using AI to apply, behavioral questions in their classic form are where the bottom falls out first.

    Psychometric and personality tests

    These measure traits, not behavior. They're useful as one input, especially for adaptability and conscientiousness, but they tell you who someone tends to be, not who they are with a frustrated parent on the line. Treat them as a supplement, not the spine. Truffle's personality testing software post breaks down where they fit.

    • Role play and live simulations work but burn time. They scale poorly past a final round.
    • Reference checks still help. Past managers will tell you, sometimes obliquely, how someone handled conflict and pressure. Use them as a tiebreaker, not a primary signal.
    • Paid trial projects are the most accurate signal you can get, full stop. They're also the highest-effort, hardest to schedule, and impossible to run on 40 candidates. Save them for the last two standing.

    That leaves a gap in the middle of the funnel. Something repeatable, something hard to fake, something you can run on 30 candidates without losing your week. That's where structured one-way interviews come in.

    The new shape of a soft skills assessment

    Move the performance earlier. Give every candidate the same short set of recorded video prompts. Score them against a written rubric. Watch the responses on your own time.

    It's a structured one-way interview instead of a live one, which means you can review 20 people in the time it used to take you to schedule three calls. Every candidate faces the exact same prompts, which is the only way "structured" actually means anything. And because the prompts are scenarios, not behavioral lookbacks, a rehearsed STAR story doesn't help.

    You give up the small talk at the start of a phone call. The fix is to be clear in the invitation that this is a structured first step, that a human will absolutely talk to the finalists, and that the format exists so you're not asking 40 people to block 20 minutes on their calendar. Most candidates prefer it once they try it, which is part of why one-way interviews keep growing in remote hiring.

    A four-dimension rubric

    Before you write a single prompt, write the rubric. If you don't know what you're looking for, the video is just footage. A shared interview scorecard is what turns a gut read into a defensible rating, and "defensible" is the word that matters. The hiring manager has to be able to look at the rating and trust it.

    Score each dimension 1 to 5:

    1. Clarity. Can you follow what they're saying on the first pass? Are their sentences organized? Do they answer the question that was asked, or do they drift?
    2. Concreteness. Do they give specific examples, names, numbers, and situations? Or do they speak in generic phrases that could come from anyone's resume, or any chatbot?
    3. Listening cues. Do they reference the prompt they were asked? Do they acknowledge the scenario, or do they pivot to a rehearsed monologue?
    4. Warmth and tone. Would you want this person on the other end of your hardest day at work?

    A 5 in every dimension is a finalist. A 4 in at least three dimensions is a strong review. Anything averaging below 3 is a no. Same rubric, every candidate, same role. That's the part that holds up later when someone asks why this person made it through and that one didn't. Structured interviews aren't a recruiter trend; they're the format with the strongest bias-reduction evidence behind them.
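    If it helps to make the thresholds concrete, the decision rule above can be sketched in a few lines of Python. The dimension names and cutoffs come straight from the rubric; the function name, the dict shape, and the "maybe" middle bucket (which the rubric leaves undefined) are illustrative, not part of any product.

    ```python
    # Sketch of the rubric decision rule described above.
    # Dimensions and thresholds are from the rubric; the function
    # and the "maybe" bucket are illustrative placeholders.

    DIMENSIONS = ("clarity", "concreteness", "listening", "warmth")

    def rate(scores: dict) -> str:
        """Map four 1-5 dimension scores to a screening decision."""
        values = [scores[d] for d in DIMENSIONS]
        if all(v == 5 for v in values):
            return "finalist"
        if sum(values) / len(values) < 3:
            return "no"
        if sum(1 for v in values if v >= 4) >= 3:
            return "strong review"
        return "maybe"  # rubric doesn't name this bucket; placeholder
    ```

    The point of writing it down this way is that the rule has no free parameters per candidate: two reviewers scoring the same response against the same rubric land in the same bucket.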

    Sample prompts that are hard to fake

    Each prompt below is built to surface the four dimensions without being a trick question. Keep response time to 90 seconds with 30 seconds of thinking time. Long enough to think, short enough to force them to choose their words instead of editing them.

    Client-facing roles (freelancers, customer success, account managers)

    • "A client you've been working with for two months writes back frustrated with a deliverable. They say it's not what they asked for. Walk me through what you'd say in your next reply."
    • "You're on a call and you realize halfway through that you misunderstood what the client wanted. What do you do next?"
    • "Tell me about a time you had to say no to a client request. What did you say, and how did they respond?"

    Tutors, teachers, and classroom readiness

    • "A student says 'I just don't get this and I don't think I ever will.' What's the first thing you say?"
    • "Describe how you'd explain compound interest to a 12-year-old who is already frustrated with the class."
    • "Tell me about a student who was difficult to reach. What did you try, and what worked?"

    Customer service and retail leads

    • "A customer is upset about something that isn't your fault. Talk me through the first 30 seconds of that conversation."
    • "You have to deliver bad news to a customer who has been waiting for an update. What do you say?"

    Notice what's in these prompts: a scenario, a specific trigger, and a clear thing to deliver. Notice what's not there: "tell me about a time you showed great communication skills." That's the kind of abstract behavioral prompt candidates have been coached to answer, and that ChatGPT will write a tidy STAR response to in seven seconds. Replace it with a scenario they have to perform, not describe.

    For more, our situational interview questions catalog goes deeper by role type.

    How to evaluate without losing your week

    If you watch every full response from every candidate, you'll save nothing. Build the evaluation flow in the same place you run the interview.

    Watch the first 30 seconds at normal speed. If the candidate loses you in the first 30 seconds of a one-minute answer, they'd lose a client too. If they hold your attention, watch the rest at 1.25x and score the rubric.

    Leave a one-line note per candidate so you can defend the rating later. This is how you evaluate candidates from one-way interviews without re-watching anything.

    Share the strongest three with the hiring manager using a read-only link so they can review on their phone between meetings. The goal is one live round with the top three, not five live rounds with twelve people.

    How Truffle helps

    Truffle is a candidate screening platform that combines one-way video interviews, resume screening, and talent assessments. For soft skills work, the one-way interviews carry most of the weight. You write the prompts once, send one share link, and every candidate records against the same questions on their own time.

    Once responses come in, AI Summaries give you a quick overview of what each candidate said and how they said it. AI Match scores each response against the criteria you set in intake, so the candidates who most closely align with the role are already at the top of the list. The AI does the surfacing; you still make the call.

    Candidate Shorts compile the three most revealing moments from each interview into a 30-second clip, so you can see how someone shows up on camera without watching the whole recording. That's the artifact that survives the handoff to the hiring manager. They watch 30 seconds and they're aligned with you, instead of asking you to defend a number.

    This pattern shows up at both ends of Truffle's customer base. Agencies like FreeUp, hiring client-facing freelancers, need to know whether a designer or VA can handle a tough client without help. Teams like JobsConnect, staffing tutors and teachers, need to know whether a candidate is ready to walk into a classroom and hold it. Resumes can't tell you that. A 15-minute phone screen can't either. Three recorded prompts and a shared rubric can.

    If you want to see the full flow, the one-way interview tips guide walks through it end to end, and our candidate screening software page shows how the pieces connect.

    Truffle starts at $149 a month on Self-Serve, with a 7-day free trial and no credit card required. You can run your first structured soft skills assessment the same day you sign up.

    Frequently asked questions on soft skill assessment

    What's the best way to assess soft skills in hiring?

    Use a short, structured set of scenario-based video prompts. Score the responses against a written rubric. Review on your own time. Scenario prompts surface real behavior, the rubric keeps every candidate on the same bar, and async review saves the hours phone screens used to eat.

    Does the STAR method still work?

    Partially. STAR is still a useful structure for answering a question, but it has stopped being a useful filter for hiring. Candidates rehearse STAR stories and AI tools polish them in seconds. Use scenario prompts that ask candidates to perform in the moment instead of describing a past story.

    Can you really assess communication skills remotely?

    Yes, and often better than in person, because you're watching candidates perform in the exact medium they'll work in. If the role is remote and client-facing, a recorded soft skills assessment is a job sample, not a compromise.

    How many prompts should a soft skills interview include?

    Three to five. Fewer than three and you don't have enough signal. More than five and completion rates drop and review time balloons. Keep each response to 90 seconds and expect the full interview to take candidates around 10 minutes.

    How do you make a soft skills rating defensible to a hiring manager?

    Three things. A written rubric they've seen before the interview runs. The same prompts and the same rubric for every candidate in the role. A short clip they can watch themselves, so the rating isn't the only artifact they're trusting.

    Sean Griffith
    Sean began his career in leadership at Best Buy Canada before scaling SimpleTexting from $1MM to $40MM ARR. As COO at Sinch, he led 750+ people and $300MM ARR. A marathoner and sun-chaser, he thrives on big challenges.
    You posted a role and got 426 applicants. Now what — read all of their resumes and phone screen 15 of them?

    Try Truffle's applicant screening software instead.
    Start free trial