QA leads and contact center managers running quality assurance programs know the frustration: scores go out, agents review them, nothing changes. The problem is rarely agent motivation. It is almost always report structure. A QA report that delivers a number and a list of failed criteria tells an agent what did not work. It does not tell them what happened in the call, where it happened, or what to do differently. The agents who improve fastest receive something structurally different: the specific criterion that drove their score down, the exact moment in the transcript where it occurred, a behavior-change recommendation written in plain language, and a follow-up date. This six-step guide builds that structure into your QA reporting process.
Step 1: Structure QA Reports with Three Distinct Sections
Most QA reports are a single score summary with criterion breakdowns. That format is useful for tracking trends but insufficient for driving behavior change. Restructure every QA report around three sections.
The first section is the score summary: overall score, criterion-level scores, and comparison to team average and individual baseline. The second section is the evidence layer: for every criterion where the agent scored below threshold, a direct link or timestamp to the specific transcript moment that triggered the score. The third section is the coaching notes: for each underperforming criterion, a specific behavioral recommendation. Not "improve active listening," but "when the customer described their issue in lines 4 through 7, you began your solution before they finished. In the next call, hold your response until the customer completes their thought and confirm you understood before offering a resolution."
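The three sections above can be sketched as a data structure. This is a minimal illustration, not the schema of any particular QA platform; all field names here are hypothetical and should be adapted to your own tooling.

```python
from dataclasses import dataclass, field

# Hypothetical field names -- adapt to your QA platform's schema.

@dataclass
class EvidenceItem:
    criterion: str
    timestamp: str           # e.g. "04:12" into the call
    transcript_excerpt: str  # the exact quote that triggered the score

@dataclass
class CoachingNote:
    criterion: str
    situation: str            # where in the call the behavior occurred
    observed_behavior: str    # what the agent did
    recommended_behavior: str # what to do differently next time

@dataclass
class QAReport:
    # Section 1: score summary
    overall_score: float
    criterion_scores: dict[str, float]
    team_average: float
    agent_baseline: float
    # Section 2: evidence layer (one item per below-threshold criterion)
    evidence: list[EvidenceItem] = field(default_factory=list)
    # Section 3: coaching notes (one per underperforming criterion)
    coaching_notes: list[CoachingNote] = field(default_factory=list)
    follow_up_date: str = ""  # required in practice; see Step 4
```

Making the evidence and coaching sections first-class fields, rather than free text appended to a score summary, is what forces every report to carry all three sections.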
Insight7 links every criterion score to the exact transcript quote that triggered it, giving managers the evidence layer for coaching notes automatically.
Step 2: Write Coaching Notes in Behavioral Terms
The most common reason coaching notes do not change behavior is that they are written in evaluative rather than behavioral language. "Lacked empathy" is an evaluation. "Did not acknowledge the customer's frustration before moving to the resolution script" is a behavior. The distinction is not semantic. Agents cannot change an evaluation. They can change a behavior.
Every coaching note should follow this structure: describe the situation where the behavior occurred, describe what the agent did, describe what a different behavior would look like in the same situation. That three-part structure is portable across any criterion and any call type.
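The three-part structure can be enforced with a small template helper. This is an illustrative sketch (the function and its wording are assumptions, not part of any tool); the point is that the template forces second-person, situation-behavior-alternative phrasing.

```python
def format_coaching_note(situation: str, observed: str, alternative: str) -> str:
    """Assemble a second-person coaching note from the three parts:
    the situation, what the agent did, and what to do differently.
    (Hypothetical helper for illustration only.)"""
    return (
        f"In the moment when {situation}, you {observed}. "
        f"In the next call, {alternative}."
    )
```

Because the template takes the situation and observed behavior as required inputs, evaluative shorthand like "lacked empathy" has nowhere to go; the evaluator must name a concrete moment and action.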
Avoid this common mistake: Writing coaching notes as performance commentary directed at the manager's record-keeping rather than at the agent's next call. Phrases like "agent continues to struggle with objection handling" are documentation, not coaching. Coaching notes must be written in second person, addressed to the agent, describing specific actions they can take before their next evaluated call.
What should a QA coaching note include to actually change agent behavior?
A coaching note that changes behavior includes four elements. First, the criterion: name the specific dimension where the behavior gap occurred so the agent knows exactly what is being addressed. Second, the evidence: reference the exact moment in the call, by timestamp or transcript line, where the gap appeared. Third, the behavior change: state in concrete, actionable terms what to do differently. Fourth, the practice path: if a role-play scenario or training module exists that directly addresses the criterion, name it and assign it. SQM Group research on coaching effectiveness in contact centers identifies that feedback specificity, particularly evidence-linked behavioral recommendations, is the strongest predictor of whether agents implement feedback between evaluations. Generic feedback, even when delivered by skilled coaches, produces significantly lower improvement rates than specific, evidence-grounded notes.
Step 3: Link Every Coaching Note to the Specific Transcript Quote That Triggered It
Coaching without evidence is opinion. Agents who receive a 65% on "objection handling" and a note to improve have no way to know whether the evaluator's judgment is accurate, what kind of lapse was scored, or what they were actually doing that produced the score.
Linking the coaching note to the specific transcript excerpt changes the conversation. The agent can review the moment in context. That self-confrontation is more persuasive than any amount of managerial instruction. Instead of defending a score, the manager facilitates a review of evidence: "Let's look at this moment together. What do you think happened here?"
Insight7 provides evidence-backed scoring where every criterion links back to the exact quote and location in the transcript, eliminating the ambiguity that makes agents resistant to QA feedback.
Step 4: Set a Follow-Up Date in Every Report
A QA report without a follow-up date is a report without accountability. The follow-up date creates a time horizon for the behavior change and signals that the coaching note is an active development commitment, not administrative paperwork.
Tie the follow-up to the next evaluated call review, not an arbitrary calendar date. "We will review your criterion scores on your next five evaluated calls in our session on the 15th" connects the accountability to the data. Build the follow-up date into the report template as a required field: optional fields get skipped.
How do you track whether QA feedback led to performance improvement?
The measurement framework is straightforward: compare criterion-level scores on the 10 calls evaluated before the coaching note was delivered against the 10 calls evaluated after. An improvement of five or more points on the specific criterion targeted is a reasonable threshold for concluding the coaching was effective. No movement on the targeted criterion, combined with movement on other criteria, suggests the coaching note was not specific enough to address the right behavior. No movement anywhere suggests either a coaching quality problem or a systemic issue that individual coaching cannot resolve. Insight7's score tracking over time shows improvement trajectory per agent per criterion, making pre/post comparisons straightforward without requiring manual data compilation.
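The pre/post comparison described above is simple enough to sketch directly. This is a minimal illustration of the framework, assuming scores are on a 100-point scale and the five-point improvement threshold stated above; the function name is hypothetical.

```python
def coaching_effect(pre_scores: list[float],
                    post_scores: list[float],
                    threshold: float = 5.0) -> tuple[float, bool]:
    """Compare the mean criterion score on calls evaluated before the
    coaching note with the mean on calls evaluated after.

    Returns the point change and whether it clears the improvement
    threshold (five points, per the framework above)."""
    pre_avg = sum(pre_scores) / len(pre_scores)
    post_avg = sum(post_scores) / len(post_scores)
    change = post_avg - pre_avg
    return round(change, 1), change >= threshold
```

Run it per criterion, not on the overall score: movement on untargeted criteria with no movement on the targeted one is the signal, described above, that the note was not specific enough.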
Step 5: Use Report Data to Identify When Manager Coaching Is Not Moving Scores
If an agent completes multiple coaching cycles with no criterion-level improvement, diagnose in order: are coaching notes meeting the behavioral specificity standard from step two? Is the manager following up at the rate required in step four? Does the agent have a practice deficit rather than a knowledge gap?
When multiple agents coached by the same manager fail to improve on the same criteria, the issue is coaching quality, not agent performance. One agent not improving could be an agent issue. Three agents not improving on the same criterion under the same manager is a coaching methodology problem.
Insight7 aggregates criterion scores across all agents on a team, making this pattern analysis visible at the manager level and grounding the coaching quality conversation in data rather than perception.
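The "three agents, same criterion, same manager" pattern can be detected mechanically from coaching outcome data. A minimal sketch, assuming you can export a record per agent who failed to improve on a criterion after coaching; the function and record shape are assumptions for illustration.

```python
from collections import defaultdict

def coaching_quality_flags(records: list[tuple[str, str, str]],
                           min_agents: int = 3) -> list[tuple[str, str]]:
    """records: (manager, agent, criterion) tuples, one per agent who
    showed no improvement on a criterion after a coaching cycle.

    Flags manager/criterion pairs where non-improvement repeats across
    several distinct agents -- a coaching-methodology signal rather
    than an individual agent issue."""
    agents_by_pair: dict[tuple[str, str], set[str]] = defaultdict(set)
    for manager, agent, criterion in records:
        agents_by_pair[(manager, criterion)].add(agent)
    return [pair for pair, agents in agents_by_pair.items()
            if len(agents) >= min_agents]
```

One flagged pair is a prompt for a conversation about coaching method, grounded in the counts rather than in perception.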
| Report Section | Purpose | Status in Most Reports |
|---|---|---|
| Score summary | Tracks trends, feeds performance management | Usually present |
| Evidence layer | Makes coaching credible and specific | Usually missing |
| Coaching note | Drives behavior change | Usually generic or absent |
Step 6: Aggregate Coaching Notes Monthly to Identify Systemic vs. Individual Patterns
Individual coaching notes solve individual problems. Monthly aggregation of coaching notes across all agents reveals whether the problems are individual or systemic. If 70% of coaching notes written in a given month reference the same criterion, that criterion either represents a program-level training gap or a process design issue that individual coaching cannot resolve.
Build a monthly coaching note audit into your QA calendar. Export all coaching notes from the month, categorize by criterion, and calculate the frequency. Any criterion appearing in more than 30% of notes is a systemic issue requiring a program-level response, not more individual coaching. Bring the aggregated data to your training and program design team with a specific recommendation for what content or process change would address the pattern.
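The monthly audit described above reduces to a frequency count. A minimal sketch, assuming the month's coaching notes can be exported as a list of criterion names and using the 30% cutoff stated above; the function name is hypothetical.

```python
from collections import Counter

def systemic_criteria(notes: list[str],
                      cutoff: float = 0.30) -> dict[str, float]:
    """notes: criterion names, one entry per coaching note written
    this month.

    Returns criteria appearing in more than `cutoff` of all notes,
    with their frequency -- candidates for a program-level response
    rather than more individual coaching."""
    total = len(notes)
    freq = Counter(notes)
    return {criterion: round(count / total, 2)
            for criterion, count in freq.items()
            if count / total > cutoff}
```

The output is exactly what the training and program design team needs: the over-represented criterion and how much of the month's coaching load it consumed.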
FAQ
How many QA reports should an agent receive per month to drive meaningful improvement?
ICMI guidance suggests a minimum of two to four formal QA feedback sessions per month for agents in active development. Frequency matters less than specificity: two evidence-linked reports with accountability follow-up outperform six generic score summaries.
What is the right length for a coaching note in a QA report?
Coaching notes should be concise enough to read in under two minutes and specific enough to act on immediately. Three to five sentences per criterion is usually sufficient. Longer coaching notes often indicate the evaluator is documenting their own analysis rather than communicating a clear behavior change to the agent. If you find yourself writing a paragraph-length coaching note, check whether you have identified one specific behavior change or are combining multiple issues.
Should agents review their QA reports with their managers, or receive them independently?
Joint review consistently outperforms independent delivery. The conversation that happens when an agent hears their own call alongside a manager produces faster behavioral change than asynchronous document review. Reserve independent delivery for routine reports where scores are strong and no change is needed.
