What Does ‘Good’ Look Like? Rethinking Performance Reviews in the Age of AI Co-Pilots
Performance reviews used to rest on one simple assumption: the work a person produced was largely their own.
Of course they had tools and teammates, but when you looked at an email, a report or a piece of code, it felt safe to attribute it to a single individual. Reward systems, bonus schemes and promotion criteria grew up around that assumption.
AI changes the picture. Increasingly, work is co‑created between humans and AI systems. The value lies less in typing every word yourself and more in how you frame problems, judge outputs and integrate them into real‑world decisions.
So how should performance reviews evolve?
The risk of measuring the wrong things
If your review process still focuses on sheer output, such as the number of documents, campaigns or lines of code, you risk sending some unhelpful messages:
- People may hide their AI use, worried it will be seen as “cheating”.
- Others may churn out large volumes of AI‑generated work with minimal judgement.
- Managers are left guessing how much of the work they see is sustainable, safe and genuinely valuable.
Instead of arguing about whether AI is allowed, it is time to update what you mean by good performance.
Three dimensions of performance in an AI‑enabled world
You can still assess performance robustly. The focus simply shifts.
1. Problem framing and outcomes
What matters is not who typed the words, but whether the work solves the right problem.
Look for:
- How clearly the individual frames the task for themselves and for AI tools.
- Whether they select appropriate data and examples.
- The outcomes their work achieves for customers, colleagues or the organisation.
Good performers use AI to explore options, but they stay anchored in the result that actually matters.
2. Judgement and quality control
AI can draft, summarise and suggest. It cannot take responsibility.
So ask:
- How well does the person review and refine AI outputs?
- Do they apply relevant policies, data‑privacy rules and ethical principles?
- Can they explain the reasoning behind key decisions, even when AI contributed?
You want people who treat AI like a capable junior colleague: useful, but needing oversight.
3. Collaboration and learning
Effective AI use is a team sport. Knowledge spreads quickly when people share prompts, tips and caveats.
Strong performers:
- Help colleagues use AI safely and effectively.
- Share reusable prompts, templates and checklists.
- Reflect on what works, what does not and what needs escalation.
These behaviours compound value far beyond a single individual’s output.
Practical changes to your performance process
You do not need to redesign everything at once. Start with some targeted adjustments.
Update your review questions
Add prompts such as:
- “How have you used AI tools this year to improve your work or your team’s work?”
- “Where have you decided not to use AI, and why?”
- “What have you done to help others use AI safely and effectively?”
These questions legitimise AI use and surface good practice.
Refresh your examples of “meets” and “exceeds” expectations
Create short, concrete examples of behaviours at different performance levels. For instance:
- Meets expectations: Uses approved AI tools to draft and improve work, applies standard checks for quality and data privacy, and discusses AI use openly with their manager.
- Exceeds expectations: Proactively redesigns processes with AI, mentors others, and quantifies impact in terms of time saved, errors reduced or customer outcomes.
Train managers on how to talk about AI in reviews
Managers need confidence to:
- Ask about AI without sounding accusatory.
- Distinguish smart augmentation from over‑reliance.
- Spot cases where AI use might be masking deeper performance issues.
Short manager clinics, role‑play conversations and simple guidance can go a long way.
Guardrails that keep things fair
As AI becomes more embedded, equity matters. Some teams will have better tools or more opportunities than others.
To keep things fair:
- Ensure access to core AI tools is broadly consistent across comparable roles.
- Be clear which metrics are influenced by AI and adjust expectations accordingly.
- Monitor patterns in ratings and promotions to check that AI use is not creating new bias.
The opportunity: more meaningful conversations
Handled well, AI can actually make performance conversations richer.
Instead of asking, “Did you write all of this yourself?”, managers can explore:
- “How did you decide to use AI here?”
- “What options did it present that you would not have seen otherwise?”
- “What did you learn that will change how you work next time?”
Those conversations get closer to the heart of performance: judgement, learning and impact.
You do not need a separate “AI policy for performance management”. You simply need to update your idea of what good looks like when intelligent tools are part of the team. Do that, and your people will feel confident to use AI in the open, where you can guide, support and learn from it, rather than guessing what is happening in the shadows.