
The good, the bad and the ugly of learning impact measurement

We’ve spent the last few weeks exploring all things learning impact, measurement and ROI… and while we’ve seen that it doesn’t always work quite how we’d like it to, sometimes it all goes to plan!

Our CEO Philip Huthwaite has packed out his calendar with 50+ L&D leader interviews, digging into what’s working well and where there’s room for improvement in the way businesses are tackling the thorny issue of learning impact in 2025.

Unsurprisingly, there’s a real variety of experiences. On balance, there are more frustrations than victories – but we’re a positive bunch here at 5app, so let’s dig into the good, the bad and the ugly of learning impact measurement.

 

The good

One thing L&D leaders see working well right now is the anecdotal evidence they can collect from self-reported data. A lot of L&D reporting is about crafting a narrative, and having real stories (such as better serving the needs of customers or being equipped with the skills to make better business decisions) can help support the cold, hard data. 

What also works well is when learning data is viewed alongside business metrics, such as sales or call behaviour. Whether it’s correlation or direct causation, there’s something extremely satisfying about seeing the business data improving alongside a new learning initiative.

“Tying learning to business data (such as customer service KPIs) is better than relying solely on ‘feelings’.”
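As a rough illustration of what this can look like in practice, here’s a minimal Python sketch that lines up weekly course completions with a customer service KPI. Every column name and figure below is a hypothetical placeholder, standing in for exports from your own LMS and CRM or telephony systems.

```python
# A minimal sketch: viewing learning data alongside a business metric.
# All columns and figures are hypothetical placeholders.
import pandas as pd

# Weekly course completions (e.g. an LMS export)
learning = pd.DataFrame({
    "week": pd.date_range("2025-01-06", periods=8, freq="W-MON"),
    "completions": [12, 18, 25, 31, 40, 44, 52, 60],
})

# Weekly customer service KPI (e.g. first-call resolution %, from a CRM export)
business = pd.DataFrame({
    "week": pd.date_range("2025-01-06", periods=8, freq="W-MON"),
    "first_call_resolution": [61, 62, 64, 66, 69, 70, 73, 75],
})

merged = learning.merge(business, on="week")
corr = merged["completions"].corr(merged["first_call_resolution"])
print(f"Pearson correlation: {corr:.2f}")
# A strong correlation is satisfying, but it isn't proof of causation.
```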

We’re also getting better at real-world skills measurement – particularly for hard skills like programming. Scenario-based learning gives us a direct insight into how people are applying learning to real tasks, and real-world performance metrics show how learning is translating into the workplace. For example, ‘dummy phishing campaigns’, where the business sends out a fake phishing email to test employees’ phishing detection skills, help L&D know how well learners have absorbed their training and show where there might still be blind spots.
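To make that concrete, here’s a minimal sketch of how the results of a dummy phishing campaign might be scored by department. The records are invented for illustration; in practice they’d come from whichever phishing simulation tool the business uses.

```python
# A minimal sketch: scoring a dummy phishing campaign by department.
# The records below are invented; real data would come from your simulation tool.
import pandas as pd

results = pd.DataFrame({
    "employee": ["a", "b", "c", "d", "e", "f"],
    "department": ["Sales", "Sales", "Support", "Support", "Finance", "Finance"],
    "clicked_link": [True, False, False, True, False, False],
    "reported_email": [False, True, True, False, True, True],
})

summary = results.groupby("department").agg(
    click_rate=("clicked_link", "mean"),     # lower is better
    report_rate=("reported_email", "mean"),  # higher is better
)
print(summary)  # departments with high click rates are the training blind spots
```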

 

The bad

Something we heard time and time again was the frustration with those pesky vanity metrics. L&D leaders believe that businesses rely too heavily on basic completion and login data, which doesn’t tell us anything about how the learning has been understood or applied.

“There’s a real lack of sustained interest in outcomes beyond completion.”

A common worry among the L&D leaders we spoke to was around asking for too much feedback. When we think about improving our learning measurement, our first port of call will often involve sending out a survey to see how learners are getting on – but requesting feedback too frequently can lead to ‘feedback fatigue’, where response rates decline over time.

“Over-enforcement of surveys can reduce engagement.”

Several leaders also flagged that there can be a lot of bias in our current methods. Whether that’s learners overestimating their progress in self-reports or superficial questions in engagement surveys, it can sometimes feel like we’re cherry-picking the information we want, or not asking the right questions to get the right data.

Another interesting angle was the lack of ‘intangible’ benefits measured by traditional methods. When we’re so focused on numbers, we can miss the human, emotional side of learning, which is actually a huge piece of the puzzle. Do people feel more confident in their roles? Do they feel more secure at work? Do they feel motivated, engaged and appreciated?

“Learner stories are powerful. We should be using real experiences to influence engagement and buy-in.”

Finally, a practical concern: we can’t ignore that the cost of meaningful learning impact measurement can be prohibitive for many L&D teams. Methods like longitudinal focus groups can be expensive, as can repeat assessments over time. L&D teams at large organisations may have the right budget to do this, but can we really say the same for one- or two-person teams at small businesses?

 

The ugly

One thing L&D leaders really don’t enjoy is cleaning up fragmented, legacy data from previous programmes – and who can blame them? When data is wildly inconsistent from one programme to the next, or hasn’t been analysed, or was never collected in the first place, it’s impossible to properly measure improvements over time. We can improve things for our future learning initiatives, but we don’t have control over legacy data.
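There’s no silver bullet here, but much of the pain comes down to mapping each old programme’s export onto one shared schema. Below is a minimal, hypothetical sketch of that step, with invented exports showing the kind of inconsistencies (mixed column names, date formats and status labels) legacy data tends to have.

```python
# A minimal sketch: harmonising fragmented legacy completion records.
# Both exports below are invented examples of inconsistent programme data.
import pandas as pd

prog_2022 = pd.DataFrame({
    "Learner": ["Ana", "Ben"],
    "Completed On": ["03/11/2022", "15/12/2022"],     # day-first dates
    "Status": ["complete", "COMPLETE"],
})
prog_2023 = pd.DataFrame({
    "learner_name": ["Cara", "Dev"],
    "completion_date": ["2023-02-01", "2023-03-20"],  # ISO dates
    "status": ["Done", "done"],
})

def normalise(df, name_col, date_col, status_col, dayfirst=False):
    """Map one programme's export onto a shared schema."""
    return pd.DataFrame({
        "learner": df[name_col],
        "completed": pd.to_datetime(df[date_col], dayfirst=dayfirst),
        "completed_flag": df[status_col].str.lower().isin(["complete", "done"]),
    })

combined = pd.concat([
    normalise(prog_2022, "Learner", "Completed On", "Status", dayfirst=True),
    normalise(prog_2023, "learner_name", "completion_date", "status"),
], ignore_index=True)
print(combined)  # one consistent table, ready to compare across programmes
```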

Another ugly truth is that learning impact measurement is complex, and is sometimes undervalued by senior leaders. It’s treated differently to sales figures or marketing leads, as it can take a while to reap the rewards of your L&D spend. We need the support of senior leaders to secure a thriving learning culture (and that all-important budget), and getting them on board with our approach to learning impact measurement is vital.

“If your senior leaders are asking you to prove ROI, you’ve already lost.”

But L&D’s dirty little secret is that too often, we’re petrified of failure – or at the very least, of finding out we haven’t succeeded. If we don’t measure, we can’t know that we’ve failed, so nobody can tell us we’re doing a bad job. But on the flip side, if we don’t measure, how can we prove that what we’re doing is working? Facing that fear head-on is scary, but it’s the only way forward if we’re finally going to prove our value to the rest of the business.

“L&D’s fear of failure prevents proper experimentation.”

The result?

“Weak ROI tracking means L&D budgets don’t reflect the actual impact of what we’re doing.”

 

So where does that leave us?

If this were a school report card, L&D would get an A+ for effort, but there’s room for improvement when it comes to ‘getting the grades’. 

We have the right knowledge, intentions and motivations – now we need to be empowered with the right technology to collect, measure and analyse learning data. But it can’t just be any old tech – it needs to fit in with our existing tech stacks (which are becoming increasingly fragmented and complex) and work with a wide range of L&D budgets.

Sounds like a pipe dream, right? Well, maybe not…

 
