Organisations that wait until they have a perfect AI policy before getting started wait too long. Employees are already using AI tools, sometimes with good results and sometimes with risky ones. A workable policy starts with acknowledging what is already happening.
Most organisations with an internal AI policy have a document that was written 18 months ago, approved by legal, communicated once, and is now collecting digital dust while employees make daily AI decisions without reference to it. The pace of AI development has made policy-by-document largely inadequate. Effective AI governance is not a document; it is a system of decisions, frameworks, and conversations that keeps pace with the technology.
Why Most AI Policies Fail
The typical corporate AI policy fails on three dimensions: it is too generic to be actionable (applies to AI in general rather than the specific tools and use cases your organisation actually uses), it is too static to be current (written for the AI landscape of 18 months ago, not today's), and it has no adoption infrastructure (published but never embedded in training, onboarding, or operational processes).
A policy that employees encounter only when something goes wrong is not a policy — it's a liability shield. Effective AI governance means employees know what the policy says before they need it, understand why the rules exist, and have practical guidance for the decisions they encounter daily.
The Right Scope for an Internal AI Policy
An effective AI policy covers the decisions that employees actually face, in terms they can apply. For most organisations, this means clarity on:
- Approved tools: Which AI tools are sanctioned for use, in what contexts, with what data? This is the most operationally important section and requires regular updating as the tool landscape evolves.
- Data handling: What data categories can be shared with AI tools? What requires anonymisation or cannot be shared at all? GDPR and client confidentiality implications must be addressed explicitly.
- Output accountability: Who is responsible for verifying AI outputs before they are used in client-facing or consequential internal contexts? The answer should always be a human.
- Disclosure: When must the use of AI be disclosed — to clients, to regulators, in published work? What constitutes acceptable AI assistance versus unacceptable attribution risk?
- Escalation: What should an employee do if they're uncertain whether a particular AI use is appropriate? Who do they ask?
The EU AI Act: What Your Policy Must Now Address
The EU AI Act introduces mandatory requirements for organisations that deploy or use AI systems in regulated categories. For many professional services firms, financial institutions, and public sector organisations, some of their AI use cases will fall under the Act's high-risk or transparency requirements. An internal AI policy that doesn't address these obligations is not legally adequate.
Specifically, the policy should address: how your organisation identifies which AI uses fall under which risk categories, what risk assessment and documentation requirements apply, transparency obligations towards individuals subject to AI-assisted decisions, and human oversight requirements for high-risk AI systems. These are not aspirational; they are legal requirements with phased implementation deadlines, some of which already apply.
Making the Policy Live: Adoption Infrastructure
The most important thing you can do with your AI policy is connect it to the moments when it matters. This means:
- Include it in onboarding for all new employees.
- Incorporate it into role-specific AI training rather than treating it as a standalone document.
- Create a practical decision guide for the most common edge cases employees face.
- Establish a visible channel for AI policy questions.
- Review and update the policy on a defined schedule (at minimum, annually).
Organisations that treat AI governance as a living programme rather than a published document maintain relevance and compliance as the technology evolves. Those that treat it as a one-time compliance exercise will find themselves repeatedly catching up after incidents.
An internal AI policy that works in practice is simpler in some ways than the comprehensive legal document many organisations have produced — and more demanding in others. It requires ongoing attention, regular updating, and genuine embedding in everyday practice. The organisations that invest in this infrastructure will find AI governance a source of competitive confidence. Those that don't will find it a source of periodic crises.
Visser & Van Zon supports clients in developing practical AI governance frameworks, including internal AI policies, EU AI Act compliance assessments, and employee AI governance training. If your current AI policy is more document than programme, we'd be glad to help you change that.