New agents who struggle to understand call quality standards are usually dealing with one of two problems: the standards are too abstract to apply in practice, or the feedback loop between observed performance and coaching is too slow to build clarity. A structured call review process fixes both by giving new agents concrete examples of what quality looks like and by shortening the time between a call happening and a coach explaining it.
This guide covers how to build a call review process specifically designed for new agent onboarding, including how to handle agents who aren't connecting to quality standards in the first weeks.
Why New Agents Struggle with Call Quality Standards
Call quality standards written as policies or bullet points in an onboarding manual rarely transfer to live call behavior. An agent can read "demonstrate empathy with frustrated customers" and genuinely not know what that means when a customer is yelling about a delayed shipment.
The gap is between knowing the standard and recognizing it in the moment. Call review closes this gap by showing the agent examples of the standard applied and not applied, in real conversations, with specific explanation of why each scored the way it did.
Without a structured review process, new agents learn quality standards primarily through trial and error, which is slow and expensive when each error is a real customer interaction.
What should you do when a new agent doesn't understand call quality standards?
When a new agent is struggling with quality standards, the first step is identifying whether the issue is conceptual or behavioral. A conceptual gap means the agent doesn't understand what the standard requires. A behavioral gap means they understand it but can't execute it consistently under the conditions of a live call.
Pull five to eight of the agent's recent calls and score them against the criteria where they're struggling. If scores are low on every call type, the issue is conceptual. If scores are low only on complex or high-stress calls, the issue is behavioral. Each requires a different intervention.
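The diagnostic above can be sketched as a small function. This is a hypothetical illustration, not part of any specific QA tool: the call types, criterion names, and 0-to-100 scoring scale are all assumptions for the example.

```python
# Hypothetical sketch: classify a quality gap as conceptual or behavioral
# from a small batch of scored calls. Call types, criterion names, and the
# 0-100 scale are illustrative assumptions, not a fixed schema.

def classify_gap(scored_calls, criterion, threshold=70):
    """scored_calls: list of dicts like
    {"call_type": "routine", "scores": {"empathy": 55}}.

    Returns "conceptual" if the criterion scores low on every call type,
    "behavioral" if it scores low only on some call types,
    or "no gap" if no call scores low.
    """
    low_types = set()   # call types with at least one low score
    all_types = set()   # every call type seen in the batch
    for call in scored_calls:
        all_types.add(call["call_type"])
        if call["scores"][criterion] < threshold:
            low_types.add(call["call_type"])
    if not low_types:
        return "no gap"
    return "conceptual" if low_types == all_types else "behavioral"
```

So a batch where routine calls score 80 on empathy but escalation calls score 50 would classify as a behavioral gap, pointing toward practice under pressure rather than re-teaching the standard.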
Step 1: Define Quality Standards as Behavioral Criteria
Before reviewing calls with new agents, translate your quality standards into observable behaviors. "Demonstrate empathy" becomes "acknowledge the customer's emotional state before moving to resolution." "Follow the process" becomes "use the correct greeting, verify the customer's identity, summarize the resolution before ending the call."
These behavioral translations are what allow you to point to a specific moment in a call and say "this is where the standard was or wasn't met." Without them, call review devolves into impressionistic feedback.
Insight7's configurable scoring system supports behavioral anchor definitions for each criterion, specifying what exemplary and deficient performance look like. This structure makes it possible to explain to a new agent exactly why a specific moment scored the way it did.
Step 2: Score the First Two Weeks of Calls
During the first two weeks of live calls, score every call for each new agent rather than sampling. New agent call volume is typically lower, making this feasible. The goal is not to penalize new agents but to identify which standards they're applying correctly and which they're missing consistently.
Insight7 automates this by processing all calls as they come in, generating scored evaluations without manual review time. Manual QA typically covers 3% to 10% of calls; automated scoring covers 100%, which matters most during onboarding, when patterns appear fastest.
Look for:
- Are the same quality criteria scoring low across all calls (likely conceptual gap)?
- Are scores low only on certain call types (likely exposure gap)?
- Are scores improving week over week (trajectory is positive even if level is low)?
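The third question, whether scores are improving week over week, can be checked mechanically once every call is scored. A minimal sketch, assuming a simple per-call record with a week number and a 0-to-100 score (both illustrative, not a prescribed data format):

```python
# Hypothetical sketch: is a new agent's weekly average score on a criterion
# trending upward? The week numbering and 0-100 scale are illustrative
# assumptions for the example.

def weekly_averages(scored_calls, criterion):
    """scored_calls: list of dicts like {"week": 1, "scores": {"empathy": 60}}.
    Returns average scores ordered by week."""
    by_week = {}
    for call in scored_calls:
        by_week.setdefault(call["week"], []).append(call["scores"][criterion])
    return [sum(v) / len(v) for _, v in sorted(by_week.items())]

def trajectory_is_positive(scored_calls, criterion):
    """True if each week's average is at least as high as the previous week's."""
    avgs = weekly_averages(scored_calls, criterion)
    return all(later >= earlier for earlier, later in zip(avgs, avgs[1:]))
```

A positive trajectory at a low absolute level argues for staying the course; a flat or declining one argues for changing the intervention.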
Step 3: Run Weekly Call Review Sessions With Evidence
Weekly call review sessions during onboarding should use actual calls from that week as the examples. Pull one call where the agent met a standard well and one where they didn't, on the same criterion. Show both.
This contrast approach is more effective than only reviewing failures. The agent sees the difference between the two calls on the same behavior and understands concretely what "good" looks like versus what they actually did.
For each low-scoring moment, ask the agent what they were thinking. This surfaces the mental model behind the behavior. If an agent skipped the empathy acknowledgment because they thought moving to resolution faster was what the customer wanted, that's a coaching conversation about when customers need to feel heard before they're ready to hear solutions.
How long does it take a new agent to reach quality standards?
Most new agents reach consistent performance on basic quality criteria within four to six weeks of live calling with structured weekly review. Complex skills like empathy under escalation or consultative questioning can take eight to twelve weeks of deliberate practice. Agents who receive weekly feedback anchored in specific call evidence consistently reach quality standards faster than those receiving periodic or general feedback.
Step 4: Assign Roleplay for the Criteria Where Scores Are Lowest
Call review identifies the gap. Roleplay builds the skill. After each weekly review session, assign a scenario targeting the specific criterion where the agent is struggling most.
Insight7's AI coaching module generates roleplay scenarios from actual call transcripts. The most challenging customer interactions from the agent's own calls become practice templates. Agents can retake scenarios until they pass the configured threshold, with scores tracked over time.
This practice-before-deployment model is especially valuable for onboarding. Agents can encounter difficult call types in a safe environment before those calls happen in production.
Step 5: Set Readiness Criteria, Not Just Onboarding Timelines
Define a readiness threshold for each call type the agent will handle independently. For example, an agent is ready for unsupervised escalation calls when they score consistently above 75% on empathy and de-escalation criteria across at least two consecutive scoring batches.
This evidence-based readiness model replaces "they've been here 30 days" with "here's what their call data says about their current skill level." It protects customers, managers, and the agent from premature deployment.
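The example readiness rule above is mechanical enough to sketch in code. This is a hypothetical illustration: the batch structure, criterion names, and 75% threshold are assumptions taken from the example, not a prescribed implementation.

```python
# Hypothetical sketch of the example readiness rule: every call in each of
# the two most recent scoring batches must score above the threshold on
# every listed criterion. Batch structure and criterion names are
# illustrative assumptions.

def ready_for_call_type(batches, criteria, threshold=75):
    """batches: chronological list of scoring batches; each batch is a list
    of per-call score dicts, e.g. {"empathy": 80, "de_escalation": 78}.

    Returns True only when the two most recent batches clear the threshold
    on every criterion for every call.
    """
    if len(batches) < 2:
        return False  # not enough evidence yet for a readiness call
    for batch in batches[-2:]:
        for call_scores in batch:
            if any(call_scores[c] <= threshold for c in criteria):
                return False
    return True
```

The point of expressing the rule this way is that "ready" becomes a property of the agent's call data, checked the same way for everyone, rather than a judgment tied to days on the job.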
If/Then Decision Framework
| Situation | Action |
|---|---|
| Agent scores low on all criteria consistently | Conceptual gap; review each standard with contrast call examples |
| Agent scores low on one criterion across all calls | Assign targeted roleplay for that behavior before next week's session |
| Agent scores improve then plateau | Introduce harder scenarios or different call types to continue development |
| Agent is meeting standards in practice calls but not in live calls | Check whether practice scenarios match real call conditions; adjust difficulty |
Common Mistakes in New Agent Call Review
Only reviewing failures. New agents need to see what good looks like, not just what they're doing wrong. Always pair a failure example with a success example on the same criterion.
Reviewing too many criteria at once. Focus on one or two behaviors per session. More than that and the agent loses clarity about what to practice next.
No follow-through between sessions. Assign a specific practice activity at the end of every review session and check completion at the start of the next. Without follow-through, the review session is a conversation, not a development loop.
Insight7 supports the full onboarding call review cycle, from automated scoring through to weekly trend reports and AI-generated practice assignments. See the Improve Quality Assurance workflow for how this fits into a complete QA operation.
FAQ
How often should you review calls with a new agent during onboarding?
Weekly during the first eight weeks, with additional ad-hoc reviews when alert thresholds are triggered. After eight weeks, shift to sessions every two weeks if the agent is meeting quality standards consistently.
What's the fastest way to get a new agent to understand call quality standards?
Show them a call where the standard was met and a call where it wasn't, on the same criterion, within 24 hours of each call happening. Specificity and recency matter more than frequency. One well-structured review session per week outperforms three general feedback conversations.
