Sales managers who keep coaching logs without connecting them to performance review data are tracking activity, not impact. This six-step process shows you how to build a coaching log that ties every session to measurable performance outcomes, so your next review cycle has evidence instead of impressions. The key shift: stop logging what you discussed and start logging what changed.
What You Need Before Step 1
Gather these before starting: access to your QA platform or call scoring system for the last 90 days, your current performance review criteria (win rate, ramp time, or quota attainment), and a spreadsheet or template where you can record structured session data. You also need 30 minutes to define which metrics you will track before the first session.
Step 1: Map Coaching Activity to Performance Outcomes
Start by listing the three to five performance outcomes you measure in formal reviews: win rate, criterion score delta, ramp time, quota attainment, or average handle time. For each outcome, identify the specific call behavior that drives it. Win rate connects to objection handling. Ramp time connects to script adherence in the first 30 days. Criterion score delta is the most direct: it measures whether a coached behavior improved after the session.
Every log entry must tie to one of these outcomes. If a coaching topic cannot connect to a measurable outcome, it does not belong in the log used for performance reviews.
Common mistake: Logging everything discussed in a session. Broad notes ("covered tone and pacing") produce unstructured data that cannot be compared across reps or reviewed at scale. Narrow each entry to one behavior and one outcome metric.
Step 2: Record the Four Required Fields Per Session
Each log entry needs exactly four fields: session date, criterion targeted, score before the session, and score after the next evaluated call. This structure makes the log machine-readable by a QA platform and comparable across managers.
A complete entry looks like: April 2, 2026 | Objection handling | 58% | 71%. An incomplete entry looks like: "Worked on objections, seemed better." The first entry supports a performance review. The second does not.
Decision point: Choose between logging per session or per criterion. Per-session logging creates one entry per coaching conversation. Per-criterion logging creates one entry per behavior targeted, even if multiple behaviors were addressed in one session. For performance reviews, per-criterion logging is more useful because it shows improvement trajectories on specific behaviors over time.
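For teams that keep the log in a script or lightweight tool rather than a spreadsheet, the four-field, per-criterion structure can be sketched as a minimal data record. This is an illustrative sketch only: the `CoachingEntry` class and field names are assumptions, not any platform's schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CoachingEntry:
    """One entry per criterion targeted (per-criterion logging)."""
    session_date: date
    criterion: str          # e.g. "Objection handling"
    score_before: float     # criterion score before the session, in percent
    score_after: float      # score on the next evaluated call, in percent

    @property
    def delta(self) -> float:
        """Improvement on this criterion after the session."""
        return self.score_after - self.score_before

# One session that addressed two behaviors yields two entries:
log = [
    CoachingEntry(date(2026, 4, 2), "Objection handling", 58.0, 71.0),
    CoachingEntry(date(2026, 4, 2), "Script adherence", 80.0, 84.0),
]
print([e.delta for e in log])  # [13.0, 4.0]
```

Because each entry carries exactly one criterion, filtering the list by criterion name recovers the improvement trajectory for that behavior, which is the comparison a performance review needs.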
Step 3: Track Template Completion Rate as Manager Accountability
The coaching log is also a record of manager behavior, not just rep behavior. Track how many of your scheduled sessions produced complete log entries (all four fields filled). Target a 90% completion rate over any 30-day period.
Incomplete logs signal one of two problems: sessions were skipped, or sessions happened without a targeted criterion. Both undermine the coaching program's credibility in a performance review. A manager with a 60% completion rate cannot credibly claim to have coached a rep through a performance issue.
Common mistake: Treating the log as documentation only. The completion rate is a leading indicator of whether your coaching program is structured or improvised.
Step 4: Connect the Log to Your QA Platform for Auto-Population
Manual entry creates lag and error. If your QA platform scores calls against named criteria, configure it to export criterion scores directly into your coaching log template. This eliminates the "score before" and "score after" fields as manual entries and makes the log a real-time record.
Insight7 scores every call automatically against your defined criteria and links each score to the transcript evidence. When you run a coaching session on objection handling and the rep's next five calls are scored, the criterion delta populates without manual retrieval. Sales managers using this approach spend time on coaching decisions, not data collection.
How Insight7 handles this step: Insight7's QA engine applies your weighted criteria to 100% of calls and generates per-rep scorecards showing dimension-level trends. A sales manager can open the platform, see that a rep's objection handling score moved from 58% to 71% after a coaching session, and link that entry directly to the coaching log. See how this works: Insight7 for Sales, CX and Learning
Step 5: Use 90-Day Log Data in Formal Performance Reviews
A 90-day log window gives you enough data to distinguish a trend from a one-call improvement. In a performance review, present the criterion score trajectory: where the rep started, which sessions targeted which behaviors, and where scores landed. This is a leading indicator analysis, not a lagging one.
The review conversation changes when you have log data. Instead of "you need to work on objections," you can say: "Your objection handling score was 52% in January. We ran three sessions targeting this in February. Your March average is 69%. The remaining gap is in price objection specifically, not in objection handling overall."
Decision point: Not every criterion in your QA rubric belongs in the performance review. Focus the review on the two to three criteria with the highest weight in your scoring system. These are the behaviors that most directly drive win rate, resolution, or ramp time.
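The trajectory statement in the example above reduces to a baseline, a session count, and a recent average. A minimal sketch with illustrative monthly scores (the numbers roughly match the objection-handling example; the dictionary layout is an assumption, not a QA platform export format):

```python
from statistics import mean

# Monthly criterion scores for one rep, one criterion, pulled from call scoring.
# February had three coaching sessions targeting this behavior.
scores = {
    "January": [52],
    "February": [55, 60, 63],
    "March": [67, 69, 71],
}

baseline = mean(scores["January"])
current = mean(scores["March"])
print(f"baseline {baseline:.0f}%, March average {current:.0f}%, "
      f"delta +{current - baseline:.0f} points")
```

Averaging the most recent month rather than quoting the single latest call is what separates a trend claim from a one-call improvement.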
Step 6: Distinguish the Coaching Log From the Performance Review
The coaching log is a leading indicator. The performance review is a lagging indicator. Conflating them produces reviews that punish short-term scores rather than recognize behavioral effort and trajectory.
A rep whose criterion scores dropped in week one of a new behavior, then recovered and surpassed baseline by week eight, is demonstrating exactly what good coaching looks like. The log shows the dip and the recovery. The performance review should reflect the trajectory, not the lowest point.
Track two numbers separately in every review: current criterion score (lagging) and criterion score delta from baseline (leading). The delta is the coaching signal. The current score is the performance signal. Both matter, and they tell different stories.
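The two numbers can be computed side by side from the same score series. A sketch with illustrative weekly scores showing the week-one dip and week-eight recovery described above (the `review_signals` helper and the numbers are assumptions for illustration):

```python
def review_signals(baseline, weekly_scores):
    """Return the two numbers every review should track separately:
    current score (lagging, performance signal) and
    delta from baseline (leading, coaching signal)."""
    current = weekly_scores[-1]
    delta = current - baseline
    return current, delta

# Dip in week one of the new behavior, recovery past baseline by week eight:
weekly = [60, 54, 56, 61, 64, 66, 68, 72]
current, delta = review_signals(baseline=60, weekly_scores=weekly)
print(current, delta)  # 72 12
```

Judging this rep on the week-two low of 54 would miss the +12-point delta, which is the coaching story the log exists to tell.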
Common mistake: Using the coaching log as a disciplinary record rather than a development record. If the first time a rep sees their log data is in a negative performance review, the log is being used incorrectly. Share log data with reps after every session so the review contains no surprises.
What Good Looks Like After 90 Days
After three months of structured logging, a sales manager should see: criterion score deltas of 10 to 20 percentage points on the behaviors directly targeted in coaching sessions, template completion rates above 85% for structured managers, and performance reviews that take under 30 minutes because the evidence is already documented. Quality assurance data tied to coaching logs reduces review preparation time by removing the need to reconstruct a narrative from memory.
How do you track coaching effectiveness in a sales log?
Track coaching effectiveness by recording criterion scores before and after each session, then measuring the delta over 30 to 90 days. A criterion score that improves 10 to 15 percentage points after three targeted sessions indicates effective coaching. Logs that record only topics discussed, without pre/post scores, cannot measure effectiveness.
What metrics should a sales coaching log include?
A sales coaching log needs four required fields: session date, criterion targeted, score before the session, and score after the next evaluated call. Optional fields include whether the follow-up score has been measured yet, call evidence (transcript timestamp), rep self-assessment, and manager notes on session quality. Pricing and quota data belong in the performance review, not the log.
What is the difference between a coaching log and a performance review?
A coaching log records leading indicators: specific behaviors targeted, scores before and after sessions, and improvement trajectories. A performance review records lagging indicators: quota attainment, win rate, and net revenue. Effective managers use 90 days of coaching log data to explain the lagging indicators in the performance review, connecting behavior change to business outcome.
How often should coaching logs be reviewed?
Review individual log entries weekly with the rep so scores are never a surprise. Review the full 90-day log quarterly as input to formal performance reviews. Update your criterion list semi-annually if your QA rubric changes, and backfill entries to maintain historical continuity.
Sales manager building a coaching log for 10 or more reps? See how Insight7 connects QA scores to coaching sessions automatically. See it in 20 minutes.
