We’ve all done it.
We’ve all had a stakeholder ask us for the latest stats on our learning programmes, and we’ve spent hours refining our slideshows and reports to show off all our hard work.
What’s in those slides?
Platform signups. Course completions. Workshop registrations.
On a good day, we might include learner satisfaction scores.
So what does it all mean?
Well… not all that much, honestly. Let’s talk about vanity metrics, and why they’re not as valuable as they might initially look.
Vanity metrics are data points that make a programme look successful on paper, but that don’t necessarily reflect meaningful learning, behaviour change or business impact. In other words, they may give us a surface-level indication of how many people have engaged in the learning, but they tell us nothing about whether – or how – that learning has been applied.
Examples of vanity metrics in L&D include:

- Platform signups and logins
- Course completions
- Workshop registrations
- Time spent learning
- Learner satisfaction scores
Put simply, vanity metrics are much easier to measure than the application of learning or real behaviour change. Any member of the L&D team can log into the LMS and pull stats around logins, time spent learning or course completions, but it’s a lot harder to measure the business impact of learning.
But it’s not just the L&D team who rely on vanity metrics. Stakeholders are often unclear about what results to expect from L&D – if they’re used to reports full of usage rates, that’s what they’ll keep asking for.
Vanity metrics in L&D are unhelpful for two key reasons:

- They tell us nothing about whether – or how – the learning has actually been applied on the job.
- They can encourage a tick-box culture, where the goal becomes participation rather than performance.
While it can be useful to keep an eye on vanity metrics – for instance, to check that employees are engaging with a new programme or that content isn’t getting lost on the LMS – it’s crucial that they’re considered in the context of the wider business.
Just 56% of businesses can measure the business impact of their learning programmes.
- Watershed, 2024
It’s a joint effort to shift mindsets from relying on the ‘easy’ measurements (like logins and learning sessions) to the measurements that reveal real business impact and ROI – we’re talking things like cost savings, increased customer loyalty, increased productivity, time savings, improved brand perception and more.
In fact, this overreliance on surface-level numbers can be actively harmful to L&D strategy. When we reward teams for increasing completions or sign-ups, we risk creating a tick-box culture of learning – where the goal becomes participation, not performance. Learners may rush through content to 'complete' it, without reflecting, practising or embedding new skills. Worse still, we may continue to invest in initiatives that look good in dashboards (or on those all-important stakeholder slides), but fail to solve real business problems.
That means we need to get comfortable with 'messier', more qualitative measures. Not all measures of progress and performance can be neatly quantified, and that's why it's time for us to go beyond the end-of-module quiz to find the real story.
OK, so we know there’s a problem with L&D vanity metrics. What should we be tracking instead?
Other L&D metrics worth measuring to understand real ROI and business impact include:

- Observable behaviour change on the job
- Productivity gains and time savings
- Cost savings
- Customer satisfaction and loyalty
- Improved brand perception
By focusing on metrics that link learning to performance and business outcomes, L&D teams can prove the real value of their learning initiatives instead of just reporting surface-level success.
And AI gives us even more juicy data to explore. 5app CEO Philip Huthwaite has already written about how we intend to measure the success of our own AI initiatives, and that’s just the start. With AI comes a wealth of new information and potential insights, including trending topics, knowledge gaps and the ability to measure the real-world impact of learning… and what we’re working on right now is set to take it all to the next level.