Most AI training measures the wrong things. It counts attendees, records satisfaction scores, and reports on the number of modules completed. None of these metrics tells you whether your people have actually become better at their work.
The AI training market has a quality problem. A significant proportion of the AI training programmes that organisations purchase consists primarily of vendor demonstrations, capability overviews, and inspirational case studies, followed by a Q&A session and a certificate of completion. Participants leave with a higher general awareness of AI. They return to work and use AI exactly as much as they did before. Effective AI training looks very different.
The Quality Standard: Can They Do It After?
The only meaningful quality measure for AI training is whether participants can do something useful with AI in their actual work that they couldn't do before the training. Everything else — participant satisfaction scores, trainer ratings, content comprehensiveness — is a proxy that may or may not correlate with this outcome.
This standard has immediate design implications. If participants can't demonstrate a specific new capability at the end of the session, the session has not achieved its objective. Training design must work backwards from this standard: what specific things should participants be able to do? What practice is necessary for them to achieve that? How much time does that take?
The Non-Negotiable Elements of Good AI Training
In our experience designing and delivering programmes, effective AI training reliably includes:
- Hands-on practice time: Minimum 60% of total session time should be active practice, not passive observation (see the sketch after this list). If the trainer is talking for most of the session, it isn't training; it's a presentation.
- Real tools, real tasks: Practice must happen in the actual AI tools participants will use in their work, on tasks drawn from their actual responsibilities.
- Prompt development: Every participant should leave with a set of tested, effective prompts for their most important AI use cases. This is the most tangible, immediately useful output of any AI training.
- Failure mode literacy: Participants need to know what AI gets wrong, not just what it gets right. Understanding failure modes is essential for appropriate trust calibration.
- Post-training support: Some form of support structure in the days and weeks after training dramatically improves sustained behaviour change. This can be a peer practice group, a dedicated communication channel for questions, or short follow-up sessions.
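To make the 60% rule from the first element concrete, here is a minimal sketch of a session time-budget check, assuming a simple (label, minutes, hands-on) agenda format. The agenda items and names are hypothetical illustrations, not a prescribed curriculum or a real tool.

```python
# Minimal sketch of a session time-budget check. The agenda format
# (label, minutes, is_hands_on) and the example items are assumptions
# for illustration only.

HANDS_ON_MINIMUM = 0.60  # the 60% floor discussed above


def hands_on_share(agenda):
    """Return the fraction of total session time spent in active practice."""
    total = sum(minutes for _, minutes, _ in agenda)
    practice = sum(minutes for _, minutes, hands_on in agenda if hands_on)
    return practice / total if total else 0.0


agenda = [
    ("Tool walkthrough",        30, False),
    ("Guided prompt exercises", 90, True),
    ("Solo task practice",      60, True),
    ("Failure-mode review",     30, False),
    ("Peer prompt critique",    30, True),
]

share = hands_on_share(agenda)
print(f"Hands-on share: {share:.0%}")  # prints "Hands-on share: 75%"
assert share >= HANDS_ON_MINIMUM, "Agenda fails the 60% practice floor"
```

Running the same check against a demonstration-heavy agenda is a quick way to spot a presentation dressed up as training.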
Red Flags in AI Training Proposals
When evaluating AI training proposals, the following are warning signs that the programme is likely to be high in awareness and low in capability:
- No mention of hands-on practice or participant exercises
- Generic content with no customisation to your organisation, sector, or role types
- Measurement criteria that focus on completion and satisfaction rather than capability outcomes
- No post-training support or follow-up mechanism
- Very short sessions (less than half a day) for the first role-specific training
- Content that is primarily about AI in general rather than the specific AI tools you actually use
Measuring Training Effectiveness
Good AI training should be evaluated at four levels: immediate reaction (did participants find it useful and applicable?), learning (can they demonstrate the target capabilities?), behaviour change (are they using AI differently in their work 30 and 60 days later?), and results (is there measurable productivity improvement in trained vs. untrained roles?).
Most organisations evaluate only the first level. The organisations that measure at all four levels consistently discover that programmes that score well on participant satisfaction don't always produce behaviour change — and redesign their training investments accordingly.
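As an illustration of what four-level measurement can look like in practice, here is a minimal sketch of a per-participant evaluation record. The field names and the pass criteria are assumptions made for the example, not a standardised instrument.

```python
# Minimal sketch of a four-level training evaluation record.
# Field names and the effectiveness criteria are illustrative
# assumptions, not a standardised instrument.

from dataclasses import dataclass


@dataclass
class TrainingEvaluation:
    # Level 1: immediate reaction (post-session survey, 1-5 scale)
    satisfaction_score: float
    # Level 2: learning (capability demonstrated in an
    # end-of-session exercise?)
    capability_demonstrated: bool
    # Level 3: behaviour change (AI use observed at follow-up)
    uses_ai_weekly_day_30: bool
    uses_ai_weekly_day_60: bool
    # Level 4: results (throughput relative to untrained peers;
    # 1.0 means no measurable difference)
    relative_productivity: float

    def effective(self) -> bool:
        """Satisfaction is recorded but deliberately excluded:
        the training only counts as effective if levels 2-4
        all show movement."""
        return (
            self.capability_demonstrated
            and self.uses_ai_weekly_day_30
            and self.uses_ai_weekly_day_60
            and self.relative_productivity > 1.0
        )


record = TrainingEvaluation(
    satisfaction_score=4.6,        # high satisfaction...
    capability_demonstrated=True,
    uses_ai_weekly_day_30=True,
    uses_ai_weekly_day_60=False,   # ...but behaviour change faded
    relative_productivity=1.0,
)
print(record.effective())  # False: a well-rated session that changed nothing
```

The design choice worth noting is that the satisfaction score is stored but excluded from the effectiveness verdict, which mirrors the observation above: programmes that score well on satisfaction don't always produce behaviour change.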
The bar for AI training should be the same bar we apply to any professional development investment: does it change what people are able to do? Training that doesn't meet this bar is not a bargain at any price — it consumes time and budget that could have been spent on capability development that actually works.
V&VZ designs all training programmes from the capability outcome backwards. We don't deliver awareness sessions and call them training. Every programme we run is measured by what participants can do at the end — and we design, iterate, and invest in our programmes to ensure that standard is met. We'd be glad to share our approach in more detail.