
L&D is working hard. It’s time for tech to catch up

Written by Philip Huthwaite | 08 May 2025

Over the past few months, I’ve spoken with L&D leaders across sectors including non-profits, government, retail, pharma, finance, manufacturing and tech: those delivering the learning function directly within a corporation, those supporting it externally, and the leadership figures who approve the expenditure. These weren’t theoretical chats. They were frank conversations about what’s working, what’s not, and what people have stopped pretending is fine.

I asked one consistent question: how are you measuring whether learning is actually working?

Here’s what I heard:

“We report completions because it’s the cleanest data.”

“Annual surveys? Too infrequent. Too subjective.”

“We’ve tried to understand the real impact. It’s too hard to do consistently, so we’ve resorted to proxy measures that rely on too many assumptions.”

That last one stuck with me. It was honest. It came from someone who deeply cares about getting it right.

Here’s the thing: effort isn’t the problem. No one’s ignoring the measurement gap. People are trying. Hard.

 

What people are actually doing

Some teams are experimenting with A/B testing. They’re splitting cohorts, running training with one group, and measuring shifts in metrics like attrition or promotion rates. One leader tracked customer service KPIs before and after training. Another used CRM data to trace revenue outcomes linked to learning programs.
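
To make the cohort-split idea concrete, here’s roughly what that comparison can look like in practice. This is a minimal sketch, assuming a hypothetical HRIS export with one row per employee, a cohort label and a binary attrition flag; the file and column names are illustrative, not drawn from any of these conversations.

```python
# Minimal sketch: compare attrition between a trained cohort and a control
# cohort using a two-proportion z-test. All names here are hypothetical.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

df = pd.read_csv("cohorts.csv")  # hypothetical HRIS export

# Leavers and headcount per cohort ("trained" vs "control").
leavers = df.groupby("cohort")["left_within_year"].sum()
headcount = df.groupby("cohort")["left_within_year"].count()

# Did the trained cohort's attrition rate differ from the control's?
z_stat, p_value = proportions_ztest(
    count=[leavers["trained"], leavers["control"]],
    nobs=[headcount["trained"], headcount["control"]],
)
print(f"attrition, trained vs control: z = {z_stat:.2f}, p = {p_value:.3f}")
```

Even a rough comparison like this forces the question every leader I spoke to was circling: changed compared to whom, and on what metric?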

Simulations came up multiple times, especially for managers: 

“They’re the closest thing we’ve got to observing real behaviour.”

Some use tests to measure whether skills actually stick. Phishing simulations are a common example: success isn’t just who avoids clicking, but who actually flags the threat. Others rely on pre- and post-training confidence benchmarks or field-task follow-ups.
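
For the pre- and post-benchmark approach, the arithmetic is even simpler. A hedged sketch, assuming paired self-reported confidence scores (1–5) from the same learners before and after a programme; the numbers are made up for illustration.

```python
# Minimal sketch: paired t-test on pre/post confidence scores from the
# same learners. Scores below are illustrative, not real survey data.
from scipy import stats

pre = [2, 3, 3, 2, 4, 3, 2, 3]   # confidence before training
post = [3, 4, 3, 3, 5, 4, 3, 4]  # the same learners, after training

# Is the mean change in self-reported confidence significant?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

None of this is novel statistics. The hard part, as the quotes above suggest, is gathering paired data consistently enough to run it.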

In the non-profit space, teams have linked learning to employment outcomes. One programme tracked how many learners found jobs, stayed in them, or progressed after participating in targeted upskilling.

It’s creative work. But almost all of it is manual. Hard to scale. Often hard to sustain.

 

Soft skills are still the blind spot

Everyone agrees they matter. But few can prove whether they’re improving.

The most common signals? Manager impressions. Occasional 360s. Maybe a dip in complaints. One leader put it simply:

“We say we value empathy and leadership. But we can’t measure them.”

Even where the intent is strong, attribution is muddy. Behaviour change can come from the training, but also from a new manager, a change in team dynamics, or just time.

Some teams have stopped pushing for deeper data. Not because they don’t care. Because they’re tired of fighting systems that weren’t built for this.

“If we dig too deep, we might find the learning didn’t move the needle.”

That’s not a sign of apathy. That’s a sign the tools haven’t caught up.

 

What we’re building toward

This is the space we’re focused on. A way to see real skill growth, captured from everyday work.

The goal isn’t to replace what’s already working. It’s to make that work easier. Faster. Visible. To move from “we think it helped” to “here’s what changed, and here’s how we know.”

To everyone who shared openly during these conversations, a big thank you. You’ve already proven that better is possible.

We’re almost ready to show the world what that looks like.