
The 12 Cognitive Biases Destroying Your Performance Reviews
A manager sits down to write a performance review. They genuinely want to be fair. They've worked with this employee for a year. They think they have a clear picture of their performance.
They're almost certainly wrong.
Research from organizational psychology suggests that typical performance ratings are more noise than signal. In one large multi-rater study (Scullen, Mount & Goff, 2000), idiosyncratic rater effects accounted for as much as 62% of the variance in ratings, far more than the share explained by the employee's actual performance.
Here's what's happening in your brain during reviews, and how to fight back.
Bias #1: Recency Effect
What it is: Overweighting recent events while forgetting earlier performance.
The science: Human memory doesn't work like a video recording. We remember the last few frames vividly while earlier footage fades. Neurologically, recent memories are still in active consolidation and are easier to retrieve.
How it shows up: An employee performs excellently for 11 months, then has one rough month before reviews. That rough month dominates the evaluation—even though it represents less than 10% of the review period.
The fix: Keep a running log throughout the year. Weekly notes—even bullet points—provide an objective record that balances recency with the full picture. Better yet, use a system that captures feedback in real-time so you're not relying on memory at all.
Bias #2: Primacy Effect
What it is: First impressions create a mental anchor that's hard to update.
The science: When we form initial impressions, we create a cognitive schema—a mental model of the person. New information gets filtered through that schema. Confirming evidence strengthens it; contradicting evidence gets minimized.
How it shows up: An employee who had a rocky first month (maybe during onboarding) struggles to shake that early impression, even after a year of solid performance. Conversely, someone who started strong gets the benefit of the doubt for later missteps.
The fix: Explicitly review performance by quarter or project, not holistically. This forces you to evaluate each period on its own terms rather than through the lens of early impressions.
Bias #3: Halo Effect
What it is: One positive trait creates a "halo" that elevates ratings across all dimensions.
The science: Our brains seek cognitive consistency. If we perceive someone as "good" in one area, we experience dissonance rating them poorly in others. To resolve the dissonance, we unconsciously inflate related ratings.
How it shows up: An employee who is exceptionally articulate in presentations gets rated highly on technical skills, collaboration, and strategic thinking—even when there's limited evidence in those areas. Their communication halo casts a flattering light on everything else.
The reverse (Horn Effect): One negative trait drags down all ratings. An employee who struggles with punctuality gets lower marks on quality of work and teamwork, even though those are unrelated.
The fix: Rate each competency separately, with specific evidence required for each. Complete all ratings for one competency across all employees before moving to the next. This pattern-interruption reduces cross-contamination.
Bias #4: Similarity Bias
What it is: Rating people who are like us more favorably.
The science: We're more comfortable with people who share our background, communication style, or interests. This comfort translates to perceived competence. We also have more context for understanding their behavior because we recognize our own patterns in them.
How it shows up: A manager who came up through engineering rates technical contributors more favorably than salespeople, even in a role where sales skills matter more. Or a manager who works late naturally views employees who stay late as more committed—regardless of productivity.
The fix: Build diverse evaluation panels. When multiple perspectives contribute to ratings, individual biases average out. Also, calibration sessions where managers discuss ratings with peers help surface where similarity bias might be at play.
Bias #5: Contrast Effect
What it is: Evaluating someone based on how they compare to the previous person reviewed, rather than against objective standards.
The science: Our brains evolved to detect differences, not absolutes. When you see three shades of gray, you can easily tell which is lightest and which is darkest—but you can't objectively measure any of them.
How it shows up: If you review a poor performer before a mediocre one, the mediocre employee looks great by comparison. If you review a star performer first, everyone after looks worse. The sequence of reviews changes the ratings, even though performance didn't change.
The fix: Define rating criteria before reviewing anyone. What does "exceeds expectations" actually look like? Document concrete behaviors and outcomes for each rating level. Then evaluate against those standards, not against the previous review.
Bias #6: Central Tendency Bias
What it is: Rating everyone as average to avoid extremes.
The science: Extreme ratings feel risky. Rating someone as exceptional creates expectations (and potential disappointment). Rating someone as poor feels like passing judgment. The middle ground feels safe and defensible.
How it shows up: On a 5-point scale, almost everyone gets a 3. The distribution is a spike in the center, not a bell curve. High performers aren't recognized; low performers aren't identified; the rating system becomes meaningless.
The fix: Force a distribution. Not stack ranking (which has its own problems), but requiring managers to use the full scale. If every rating is 3, the rating system provides no information. Some organizations require written justification for "meets expectations" ratings, making the default path less convenient.
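A spike-in-the-center distribution is easy to detect mechanically. This is an illustrative sketch (the function name and the 60% cutoff are made up for the example, not a researched standard):

```python
from collections import Counter

def central_tendency_flag(ratings, scale_max=5, threshold=0.6):
    """Flag a rating set that clusters at the scale midpoint.

    ratings: list of integer ratings on a 1..scale_max scale.
    threshold: share of ratings at the midpoint that triggers the
               flag (0.6 is an illustrative cutoff, not a standard).
    """
    midpoint = (scale_max + 1) // 2          # 3 on a 5-point scale
    counts = Counter(ratings)
    midpoint_share = counts[midpoint] / len(ratings)
    return midpoint_share >= threshold, midpoint_share

# A manager who rates almost everyone a 3 gets flagged:
flagged, share = central_tendency_flag([3, 3, 3, 4, 3, 3, 2, 3, 3, 3])
print(flagged, share)  # True 0.8
```

Running a check like this across all managers before calibration makes the "everyone is a 3" pattern visible instead of anecdotal.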
Bias #7: Leniency/Severity Bias
What it is: Some managers rate everyone high; others rate everyone low.
The science: This reflects the rater's personality more than the team's performance. Lenient raters avoid conflict and want to be liked. Severe raters have high standards and believe tough grading motivates improvement.
How it shows up: Two employees with identical performance get wildly different ratings based on who their manager is. This destroys the fairness of compensation and promotion decisions.
The fix: Calibration is essential. Gather managers together to compare ratings across teams. When Manager A's "exceeds expectations" looks like Manager B's "meets expectations," the inconsistency becomes visible. Calibration sessions also create social pressure for accuracy.
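One common statistical complement to calibration sessions is normalizing each manager's ratings within their own distribution, which strips out overall leniency or severity before comparing employees across teams. A minimal sketch, with hypothetical names and numbers:

```python
from statistics import mean, pstdev

def normalize_by_manager(ratings_by_manager):
    """Convert each manager's raw ratings to z-scores within that manager.

    A z-score says how far an employee sits from their own manager's
    average, so a lenient and a severe manager become comparable.
    """
    normalized = {}
    for manager, ratings in ratings_by_manager.items():
        mu, sigma = mean(ratings.values()), pstdev(ratings.values())
        normalized[manager] = {
            emp: 0.0 if sigma == 0 else (score - mu) / sigma
            for emp, score in ratings.items()
        }
    return normalized

# Manager A is lenient (mean 4.5); Manager B is severe (mean 2.5):
raw = {
    "A": {"Ana": 5, "Ben": 4},
    "B": {"Cho": 3, "Dee": 2},
}
print(normalize_by_manager(raw))
# Ana and Cho both land at +1.0: each is the top of their manager's range.
```

Normalization is a blunt instrument on small teams (it assumes each manager's team has a similar spread of true performance), so it works best as an input to the calibration discussion, not a replacement for it.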
Bias #8: Attribution Error
What it is: Attributing outcomes to the individual rather than circumstances.
The science: This is the fundamental attribution error—when explaining other people's behavior, we overestimate personal factors and underestimate situational ones. When something goes wrong for others, we blame them; when it goes wrong for ourselves, we blame circumstances (the related actor-observer asymmetry).
How it shows up: An employee misses targets during a quarter when their main client went bankrupt. The manager rates them poorly for "missing goals," ignoring that the circumstance was beyond their control. Conversely, an employee hits targets during a booming market and gets rated as exceptional, when market conditions did most of the work.
The fix: Separate inputs from outcomes. Evaluate the quality of decisions and effort, not just results. An employee who made excellent decisions but got unlucky deserves recognition. An employee who got lucky despite poor decisions shouldn't be overrated.
Bias #9: Confirmation Bias
What it is: Seeking and remembering information that confirms existing beliefs.
The science: Once we have a belief about someone, we unconsciously filter information. We notice and remember examples that confirm the belief. We dismiss or forget examples that contradict it.
How it shows up: A manager believes "John isn't leadership material." Throughout the year, they notice every time John hesitates in a meeting (confirmation) and overlook instances where John successfully led a project (contradiction). At review time, the manager's belief feels validated—but the evidence is selectively curated.
The fix: Seek disconfirming evidence actively. Before writing a review, ask: "What evidence would contradict my overall impression?" Look specifically for that evidence. Include peer feedback and 360-degree input to get perspectives beyond your own filtered view.
Bias #10: Anchoring Bias
What it is: Being overly influenced by the first piece of information encountered.
The science: Initial information creates an "anchor," and subsequent adjustments from that anchor are typically insufficient. Even arbitrary anchors influence judgment—show someone a high random number, and their subsequent estimates will be higher.
How it shows up: A manager looks at last year's rating before writing this year's review. That prior rating becomes an anchor. Even if performance changed dramatically, this year's rating hovers near last year's. Or a manager sees the employee's self-review first, and that self-assessment anchors their own rating.
The fix: Write your rating before looking at prior reviews or self-assessments. Document your evidence and conclusions independently. Only then compare to other inputs.
Bias #11: Availability Bias
What it is: Overweighting information that comes to mind easily.
The science: If we can easily recall examples of something, we assume it's common or important. Dramatic or emotional events are more "available" in memory than routine events—even if the routine events were more significant overall.
How it shows up: The big project failure in August looms large because it was stressful and memorable. The steady, reliable work throughout the rest of the year barely registers because it was… unremarkable. The available memory dominates the evaluation.
The fix: Use objective records: project outcomes, metrics, peer feedback collected throughout the year. Don't rely on what you can recall—rely on what was documented. Regular check-ins that are recorded create a more complete picture.
Bias #12: Affect Heuristic
What it is: Letting your general feelings about someone influence specific judgments.
The science: When we like someone, we perceive lower risks and higher benefits in their actions. When we dislike someone, the reverse happens. Emotions serve as a shortcut for evaluation.
How it shows up: An employee is personable and fun to work with. The manager genuinely enjoys their 1:1s. That positive affect colors the performance review—the manager unconsciously inflates ratings because it feels wrong to rate someone they like poorly. (The reverse happens for employees who rub the manager the wrong way, even if their work is solid.)
The fix: Separate the relationship from the performance. Ask: "If this same work was produced by someone I'd never met, how would I rate it?" Focus on documented outcomes and behaviors, not feelings about the person.
Building Bias-Resistant Systems
Individual awareness isn't enough. You can know all twelve biases and still fall prey to them—that's how unconscious bias works. The solution is building systems that counteract bias structurally.
Continuous Documentation
Annual reviews fail partly because they rely on 12 months of memory. Systems that capture feedback, accomplishments, and observations in real-time create an objective record that counters recency, availability, and primacy biases.
Multi-Source Feedback
360-degree feedback dilutes individual biases. When multiple perspectives contribute—peers, direct reports, cross-functional partners—no single person's biases dominate. Patterns that appear across sources are likely real; patterns from one source may reflect that rater's bias.
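The "dilution" here is just the statistics of averaging: if each rater adds an independent bias, the error of the averaged rating shrinks roughly as 1/√n. A toy simulation, with made-up parameters, to make that concrete:

```python
import random

def simulate_rating_error(n_raters, trials=10_000, bias_sd=1.0, seed=42):
    """Estimate the spread of the averaged rating's error.

    Model (illustrative, not empirical): each rater's score equals the
    true performance plus an independent bias drawn from N(0, bias_sd).
    Averaging n_raters should shrink the spread by about 1/sqrt(n).
    """
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        biases = [rng.gauss(0, bias_sd) for _ in range(n_raters)]
        errors.append(sum(biases) / n_raters)  # error of the averaged rating
    mu = sum(errors) / trials
    return (sum((e - mu) ** 2 for e in errors) / trials) ** 0.5

print(simulate_rating_error(1))  # ≈ 1.0
print(simulate_rating_error(4))  # ≈ 0.5
print(simulate_rating_error(9))  # ≈ 0.33
```

The caveat matters as much as the math: the 1/√n shrinkage only holds when raters' biases are independent. If the whole panel shares the same bias (say, similarity bias toward the same background), averaging does nothing, which is why diverse panels are the point, not just larger ones.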
Structured Evaluation Criteria
Vague criteria invite bias. "Exceeds expectations" means different things to different people. Specific, behavioral criteria—"Delivered three projects on time and within budget"—leave less room for subjective interpretation.
Calibration Sessions
When managers discuss ratings together, inconsistencies surface. "You rated Sarah a 5 on communication—tell me about that" forces articulation of evidence. Peers can challenge interpretations and identify where bias might be operating.
Separation of Competencies
Rating each competency in isolation, with specific evidence required, reduces halo/horn effects. Rating all employees on one competency before moving to the next prevents contrast effects.
Training With Practice
Bias training alone doesn't reduce bias—but training combined with practice and feedback does. Give managers scenarios to rate, then show them where their biases appeared. This builds calibration over time.
The Path Forward
Perfect objectivity is impossible. We're human; we have biases. The goal isn't eliminating bias—it's building systems that minimize its impact on decisions that affect people's careers and livelihoods.
Every organization can start somewhere:
- Document throughout the year instead of relying on memory at review time
- Collect multiple perspectives through peer and 360 feedback
- Calibrate ratings across managers to surface inconsistencies
- Define specific criteria so "exceeds expectations" means the same thing to everyone
- Train and practice to build awareness into habit
The science is clear on what goes wrong in performance reviews. Now it's a matter of building systems that help us get it right.

