The problem with vanity metrics in L&D

We’ve all done it. 

We’ve all had a stakeholder ask us for the latest stats on our learning programmes, and we’ve spent hours refining our slideshows and reports to show off all our hard work.

What’s in those slides?

Platform signups. Course completions. Workshop registrations.

On a good day, we might include learner satisfaction scores. 

So what does it all mean?

Well… not all that much, honestly. Let’s talk about vanity metrics, and why they’re not as valuable as they might initially look.

 

What are vanity metrics?

Vanity metrics are data points that make a programme look successful on paper, but that don’t necessarily reflect meaningful learning, behaviour change or business impact. In other words, they may give us a surface-level indication of how many people have engaged in the learning, but they tell us nothing about whether – or how – that learning has been applied.

Examples of vanity metrics in L&D include:

  • Course completion rates – If 90% of learners have finished a course, how can you be sure they’ve retained or applied the knowledge or skills?
  • Number of training hours – Spending 500 hours on training across the business in a month may seem impressive, but how can you know that it’s actually led to skill or performance improvements?
  • Number of learners – Does a 50% increase in LMS signups or 200 workshop attendees necessarily lead to a big increase in performance?

Why is L&D so stuck on vanity metrics?

Put simply, vanity metrics are much easier to measure than the application of learning or real behaviour change. Any member of the L&D team can log into the LMS and pull stats around logins, time spent learning or course completions, but it’s a lot harder to measure the business impact of learning.

But it’s not just the L&D team that relies on vanity metrics. Stakeholders are often unclear about what results to expect from L&D – if they’re used to reports containing usage rates, that’s what they’ll keep asking for. 

 

Why are vanity metrics so unhelpful?

Vanity metrics in L&D are unhelpful for two key reasons:

  • They lead us to focus on the wrong things
    Vanity metrics may be the easiest way to show we’re measuring our learning programmes, but they’re almost never the best metrics. If we know we’re going to be reporting on the number of LMS users, we’ll focus on driving more signups, which doesn’t necessarily translate to increased learning ROI.
  • They don’t allow us to improve our learning programmes
    To an extent, vanity metrics are irrelevant. A course with 1,000 learners may look more impressive than one with 100 learners on paper, but the 100-person course may be the one responsible for improving sales, while the 1,000-person course has little business impact. If we’re judging our learning on vanity metrics, we can’t know what is really moving the needle to help us improve our programmes over time.

While it can be useful to keep an eye on vanity metrics – for instance, to check that employees are engaging with a new programme or that content isn’t getting lost on the LMS – it’s crucial that they’re considered in the context of the wider business. 

Just 56% of businesses can measure the business impact of their learning programmes.
- Watershed, 2024

It’s a joint effort to shift mindsets from relying on the ‘easy’ measurements (like logins and learning sessions) to the measurements that reveal real business impact and ROI – we’re talking things like cost savings, increased customer loyalty, increased productivity, time savings, improved brand perception and more.

In fact, this overreliance on surface-level numbers can be actively harmful to L&D strategy. When we reward teams for increasing completions or sign-ups, we risk creating a tick-box culture of learning – where the goal becomes participation, not performance. Learners may rush through content to 'complete' it, without reflecting, practising or embedding new skills. Worse still, we may continue to invest in initiatives that look good in dashboards (or on those all-important stakeholder slides), but fail to solve real business problems.

That means we need to get comfortable with 'messier', more qualitative measures. Not all measures of progress and performance can be neatly quantified, and that's why it's time for us to go beyond the end-of-module quiz to find the real story.

 

What’s the alternative to L&D vanity metrics?

OK, so we know there’s a problem with L&D vanity metrics – so what should we be tracking?

Other L&D metrics worth measuring to understand real ROI and business impact include:

  • Knowledge retention and skills application (e.g. post-training assessments and on-the-job application)
  • Behaviour change (e.g. performance reviews, 360-degree feedback and manager observations)
  • Impact on performance (e.g. productivity, quality of work, sales increase, customer feedback)
  • Business outcomes (e.g. reduced errors, improved customer satisfaction, increased revenue)

By focusing on metrics that link learning to performance and business outcomes, L&D teams can prove the real value of their learning initiatives instead of just reporting surface-level success.
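To make that shift concrete, here’s a minimal sketch of the classic learning ROI calculation – net programme benefit over programme cost, expressed as a percentage. The figures and function name are hypothetical, purely for illustration:

```python
def learning_roi(monetary_benefit: float, programme_cost: float) -> float:
    """Return learning ROI as a percentage: net benefit over cost."""
    net_benefit = monetary_benefit - programme_cost
    return (net_benefit / programme_cost) * 100

# Hypothetical example: a sales course costing £20,000 that's credited
# with £50,000 of attributable revenue uplift.
roi = learning_roi(monetary_benefit=50_000, programme_cost=20_000)
print(f"ROI: {roi:.0f}%")  # → ROI: 150%
```

The hard part, of course, isn’t the arithmetic – it’s isolating the monetary benefit that can genuinely be attributed to the learning, which is exactly why completion counts are such a tempting substitute.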

And AI gives us even more juicy data to explore. 5app CEO Philip Huthwaite has already written about how we intend to measure the success of our own AI initiatives, and that’s just the start. With AI comes a wealth of new information and potential insights, including trending topics, knowledge gaps and the ability to measure the real-world impact of learning… and what we’re working on right now is set to take it all to the next level.

 

Ready to ditch the L&D vanity metrics?

We don’t blame you! We’re currently working on a solution that’s set to change the way businesses measure the impact of their learning for good. We’ll be sharing more about it in the coming weeks, so subscribe to Take 5 to be the first to hear what we’ve got planned…

