We’ve moved fast. AI agents are live, AI-driven content authoring is coming soon, and we’re not stopping there.
I often think of being a CEO as driving a snow plough. You set the course, stop at nothing to deliver, smooth out the roadblocks and ensure you have a fantastic team to execute each initiative as you continue to stride forward.
To deliver these AI initiatives, proof of concept (PoC) work has been critical to us at 5app.
We used a PoC to refine Vee, our AI agent, allowing us to iterate quickly and validate real-world effectiveness before scaling. The same approach guided the development of AI content authoring, helping us fine-tune functionality, understand user needs and avoid costly missteps. We see PoCs as an essential part of our development arsenal, especially for major initiatives.
But PoCs aren’t just about testing feasibility. They challenge assumptions and force decisions based on evidence rather than hype. Will this drive return on investment? Does it justify the resources? Will it improve performance, learning and engagement?
The value of the PoC is clear to us. By iterating early and often, we’ve minimised risk, uncovered new opportunities and ensured that what we build actually delivers. But PoCs also have limitations. A controlled test environment doesn’t always reflect the complexity of large-scale deployment. Success in a pilot doesn’t guarantee real-world impact. That’s why we treat them as checkpoints, not conclusions.
One thing I currently have a bee in my bonnet about is this: with all the investment pouring into AI within the learning sector, how do we know we’re not just innovating for the sake of it? Are we delivering a measurable improvement for learners and educators?
With the snow plough pushing ahead at full speed, I see another PoC on the horizon, one that asks whether we can tackle this major blind spot once and for all. Watch this space to find out what that could look like...