Measuring AI Fluency: Moving Beyond "Did They Take the Course?"

Ask most organisations how their AI capability is doing, and the answer is a completion rate. So many people have done the training, so many hold the certificate. It is a clean number, and it tells you almost nothing about whether anyone can actually work with AI. Course completion measures attendance, not capability - and the gap between the two is where AI initiatives quietly underperform.

This article looks at how to measure AI fluency in a way that reflects real capability.

Why completion rates mislead

Completion rates are popular because they are easy to collect and easy to report. But they answer the wrong question. They tell you who sat through the training, not who can apply it. Someone can complete every course and still avoid AI in their actual work, use it badly, or accept its outputs without judgement. And someone can skip the formal training entirely and be genuinely fluent because they learned by doing.

A completion rate is optimised for reporting: it looks good on a slide and tells leaders nothing they can act on.

What AI fluency actually is

To measure fluency, you have to define it. AI fluency is not knowing facts about AI. It is a set of practical capabilities. Knowing when AI helps with a task and when it does not. Being able to direct AI well to get useful output. Being able to evaluate that output - spotting where it is wrong, weak or missing context. Knowing where human judgement must be applied and applying it. And being able to adapt as the tools change. Fluency is demonstrated in work, not in a test score.

Measuring it well

Because fluency shows up in work, that is where to measure it. Look at whether people are actually applying AI in their real work, and how well - not whether they could in principle. Use demonstrations of capability: have people show how they would approach a realistic task with AI, and assess the judgement they apply. Gather observed evidence from managers and peers about who works effectively with AI and who does not. Track whether capability is deepening over time or sitting flat. And ask people directly about their confidence and where they struggle, because self-reported difficulty points you to where support is needed.

None of these is as clean as a completion rate. All of them tell you something a completion rate cannot.

Avoiding the new measurement traps

Better measurement brings its own risks. If you measure AI usage volume, people will use AI more, including where they should not - volume is not fluency. If you make fluency assessment high-stakes, people will optimise for the assessment rather than the capability. And if you measure fluency once and file it, you will miss that fluency decays as tools change. Good measurement is ongoing, focused on real application, and used to guide support rather than to rank people.

From measurement to action

The point of measuring fluency is to act on it. Measurement should tell you where capability is weak so you can direct support there, who is fluent so they can help others, where fluency is decaying so you can refresh it, and whether your capability-building is actually working. If a fluency measure does not change what you do, it is just a different vanity metric.

What leaders should do

If you are responsible for AI capability, stop reporting completion rates as if they were capability. Define what AI fluency actually means for your roles, measure it where it shows up - in real work - and use the results to direct support rather than to rank people. Treat it as ongoing, because fluency decays. And make sure every fluency measure you keep is one you would actually act on.

The bottom line

Course completion measures attendance, not capability, and the gap between the two is where AI initiatives underperform. Real AI fluency is a set of practical capabilities demonstrated in work - knowing when AI helps, directing it well, evaluating its output, applying judgement, adapting as tools change. Measure it where it shows up, keep measuring it as it decays, and use the results to act. Organisations that measure fluency honestly will know where they actually stand. Those that report completion rates will keep mistaking attendance for capability.
