Only 35% of UK organisations have formal AI policies, yet 62% report shadow AI—unapproved, unmonitored tool usage among employees. The gap between AI adoption and governance creates liability, compliance risk, and vulnerability to data breaches. A documented AI policy is no longer optional; it is foundational to operating responsibly in a regulated environment. This guide covers what your policy must contain, why each component matters, and how to implement it in practice.
The case for a formal AI policy has shifted from aspirational to urgent. Three forces converge:
Regulatory pressure is hardening. The Information Commissioner's Office (ICO) now expects organisations processing personal data through AI to maintain documented policies demonstrating lawful processing and fairness. The EU AI Act, in force since August 2024 with obligations phasing in from 2025 onwards, imposes specific policy requirements for any organisation serving EU customers or processing EU residents' data. Sector regulators (FCA, MHRA, PRA) are explicitly requiring AI governance frameworks. Organisations without documented policies face investigation, fines, and reputational damage.
Risk and liability are materialising. AI mishaps cost money: discrimination claims from biased hiring algorithms, IP claims from unlicensed training data, privacy breaches from misconfigured generative AI platforms, and regulatory sanctions. Without a documented policy, your organisation cannot demonstrate due diligence when an incident occurs. A documented policy is evidence that governance was in place and reduces financial and legal exposure.
Employee behaviour is outpacing governance. As the figures above show, 62% of UK organisations report employees using unapproved AI tools at work—ChatGPT, Claude, Midjourney—without IT oversight. This shadow AI exposes confidential data, creates IP risks, and introduces unvetted third-party systems into your network. A clear policy signals expectations and creates accountability.
A defensible AI policy contains ten core sections. Each addresses a specific governance objective. Work through them in order—later sections depend on decisions made in earlier ones.
Begin with a statement of intent. Define what the policy covers—which roles, systems, data types, and use cases fall within scope. Explicitly state who owns the policy (e.g., Chief Data Officer or Governance Board) and when it will be reviewed.
Why this matters: Scope clarity prevents ambiguity. A policy that covers "all AI tools" is unmanageable; a policy that covers "customer-facing algorithms, internal automation, and third-party AI services" is actionable and measurable.
Template language:
"This policy applies to all AI systems and tools—including generative AI platforms, machine learning models, and third-party AI services—used in decision-making, customer-facing operations, or processing of personal data. It applies to all employees, contractors, and third-party partners. The policy owner is the [Chief Data Officer / Data Governance Board]. This policy will be reviewed and updated annually or when significant regulatory changes occur."
Define key terms so there is no room for interpretation. What counts as "AI"? When does a tool become a "high-risk system"? What is "personal data" in your context? Create a simple taxonomy that teams can reference.
Why this matters: Without shared definitions, different departments will classify the same tool differently, and enforcement becomes impossible. A clear taxonomy helps legal, technology, and business teams align.
Template language:
"AI System: Any software that uses machine learning, neural networks, generative models, or algorithmic decision-making to process data and generate outputs without human instruction for each decision. High-Risk AI: AI systems that influence employment decisions, credit decisions, benefits eligibility, criminal justice outcomes, or other legally protected decisions. Also includes any AI processing biometric data or large-scale personal data. Approved Tools: AI tools (e.g. ChatGPT Enterprise, GitHub Copilot) that have passed security, IP, and compliance review and are listed in the approved tools register. Shadow AI: Use of unapproved AI tools or services without IT, legal, or governance approval."
Articulate the principles driving your AI governance. Do you prioritise transparency, fairness, accuracy, privacy, or accountability? Be specific and honest about your priorities. Principles should align with your organisation's values and regulatory obligations.
Why this matters: Principles guide decision-making when ambiguous situations arise. They signal to employees, customers, and regulators what matters to your organisation. Vague principles ("be responsible with AI") are worthless; specific principles ("all customer-facing algorithms must be auditable by customers") are actionable.
Template language:
"We will: • Use AI to augment human judgment, not replace it in high-stakes decisions • Process personal data only with lawful basis and informed consent where required • Test AI systems for bias, accuracy, and fairness before deployment • Maintain human oversight and override capability for all high-risk systems • Disclose to customers, employees, and regulators when AI influences decisions that affect them • Prioritise data security and minimise data retention to the shortest necessary period • Regularly audit AI systems and update policies as technology and regulation evolve"
Maintain a living inventory of approved AI tools, services, and platforms. For each, document: tool name, vendor, use case, security classification, data classification, date approved, and responsible owner. Revisit this quarterly.
Why this matters: Regulators will ask: "What AI systems do you use?" An approved tools register proves you know what is running in your organisation and have made deliberate, auditable choices. Shadow AI thrives in the absence of this inventory.
Template language:
"Approved Tools Register | Tool | Vendor | Use Case | Data Classification | Security Review | Approved Date | Owner | |------|--------|----------|---------------------|-----------------|---------------|-------| | ChatGPT Enterprise | OpenAI | Content drafting, ideation | Internal only (no customer data) | Passed | Jan 2026 | Marketing | | GitHub Copilot | Microsoft | Code generation | Internal only | Passed | Jan 2026 | Engineering | | Claude API | Anthropic | Customer support escalation summarisation | Customer data (encrypted) | Passed | Feb 2026 | Support |"
Define a simple classification system for AI systems: low-risk, medium-risk, and high-risk. Use clear criteria—does the system influence employment, credit, or legal decisions? Does it process biometric or large-scale personal data? High-risk systems require more scrutiny, documentation, and oversight.
Why this matters: Not all AI is equally risky. Using internal analytics is low-risk; automating hiring decisions is high-risk. A risk classification framework ensures governance effort is proportionate and efficient.
Template language:
"Risk Classification LOW-RISK: Internal analytics, content generation, ideation tools (e.g., ChatGPT for brainstorming). No personal data. No customer-facing output. No legal or reputational consequence if accuracy is poor. MEDIUM-RISK: Internal process automation, customer-facing recommendations, internal employee analytics. Limited personal data. Minor customer impact if failure occurs. Requires documentation, testing, and audit trail. HIGH-RISK: Recruitment automation, credit or benefits decisions, biometric systems, large-scale customer data processing. Significant legal, compliance, or reputational consequence. Requires full risk assessment, bias testing, human oversight, and regular audits."
State your data minimisation principles: AI systems should use only the data necessary for their stated purpose. Define rules for data access, retention, deletion, and privacy impact assessment (PIA) requirements. Specify conditions under which AI systems can and cannot process personal data.
Why this matters: AI systems often demand more data than needed. Minimisation principles protect privacy and reduce liability. They also prevent regulators from viewing your organisation as reckless with personal data.
Template language:
"Data Minimisation AI systems shall: • Use only data strictly necessary for their stated, documented purpose • Not process special categories of data (ethnicity, religion, sexual orientation) unless explicitly justified and approved by Data Protection Officer • Retain personal data for no longer than necessary to fulfil that purpose • Include deletion schedules and technical controls to enforce data retention limits • Provide a privacy impact assessment (PIA) before processing new categories of personal data • Provide individuals with transparency about AI decision-making if it affects them—including the right to human review or override"
For medium- and high-risk systems, require bias audits before deployment and annually thereafter. Test for disparate impact across protected characteristics (gender, age, ethnicity if applicable) and document results. Set thresholds: acceptable error rates, performance parity standards. Include a remediation process if bias is detected.
Why this matters: AI discrimination lawsuits are rising. Bias audits reduce legal exposure and demonstrate good faith to regulators. Audits also improve model quality—biased models are usually less accurate overall.
Template language:
"Bias Audit Requirements All medium- and high-risk AI systems shall: • Undergo a fairness audit before deployment • Document protected characteristics potentially affected (gender, age, ethnic background, disability status) • Test for disparate impact: error rates, false positive/negative rates, and approval rates must not vary by more than 5 percentage points across groups • If disparate impact is detected, document the business justification or commit to remediation • Repeat audits annually or whenever the model is significantly retrained • Document all bias audits and remediation steps in the governance record"
Define when and how you must disclose AI use to stakeholders. Employees should know when AI is screening their applications or monitoring their work. Customers should know when recommendations are AI-driven. Regulators should be informed of high-risk systems. Write disclosure templates so teams know what to communicate.
Why this matters: Transparency builds trust and satisfies regulatory expectations. Surprise AI discoveries damage reputation. Clear disclosure templates prevent teams from avoiding the conversation.
Template language:
"Transparency Obligations • High-risk AI systems: notify affected individuals and regulators of AI involvement before decisions are finalised • Customer-facing AI: clearly label recommendations as AI-generated; offer alternative human-reviewed options • Internal AI: publish the approved tools register and notify teams of new approved tools quarterly • Job applicants: disclose if AI is used in candidate screening, and provide a contact for appeal or human review • Data subjects: upon request, explain how personal data is processed by AI and whether adverse decisions are made by AI alone"
Name roles and decision rights. Who approves new AI tools? Who investigates incidents? Who owns the approved tools register? Create a governance body (e.g., AI Governance Board) that meets quarterly. Include stakeholders from data protection, legal, technology, operations, and relevant business units. Define an escalation process for high-risk decisions.
Why this matters: Without clear ownership, governance becomes nobody's job and everything falls through cracks. A named Governance Board signals commitment to regulators and builds accountability internally.
Template language:
"Governance Structure Policy Owner: Chief Data Officer (or Chief Technology Officer) — responsible for annual review, updates, and regulatory liaison AI Governance Board: meets quarterly; includes CDO, Legal, Data Protection Officer, IT Security, relevant business unit heads. Approves new high-risk AI systems and reviews audit results. Incident Response: incidents (bias discovered, data breach, regulatory enquiry) escalated to Governance Board within 48 hours. Trigger investigation and remediation. Compliance Audit: independent third party conducts annual audit of AI systems against this policy. Reports to Governance Board."
Define an incident process. What counts as an AI incident? (Bias detected, data breach, model failure, regulatory enquiry.) Who must be notified? Within what timeframe? What information must be documented? Create a simple incident report template so teams know how to escalate.
Why this matters: Incident response proves governance is active and enforceable. Regulators expect a documented process. Timely reporting reduces liability and prevents cover-ups.
Template language:
"Incident Response Protocol AI Incident: any event where an AI system causes or risks harm (bias detected, data breach, regulatory enquiry, model failure, accuracy drop, customer complaint). Reporting Timeline: • Within 24 hours: notify AI Governance Board and Data Protection Officer • Within 5 working days: complete incident report (see template below) • Within 30 days: decide remediation and communicate to affected parties if required Incident Report Template: - Date and time of discovery - System affected (name, vendor, risk classification) - Description of incident (what happened, who was affected) - Root cause (if known) - Actions taken immediately - Regulatory notification required? (Yes/No) - Remediation plan - Review date"
Having a policy on paper is not the same as having embedded governance. Here is how to move from policy to practice:
Audit your current AI tool stack. Survey teams: what tools are you using, approved or otherwise? Build the approved tools register. For each tool, gather: vendor details, data classification, security review status, business owner. Create the AI Governance Board and schedule a monthly (or quarterly) cadence. Brief the board on the policy. Identify policy owner and incident response leads.
Categorise each tool in the register as low-, medium-, or high-risk using your classification framework. For high-risk systems, commission a formal security review (data flows, encryption, third-party terms, compliance). For medium-risk systems, gather risk assessment documents from owners. For low-risk systems, document basic usage and confirm no personal data flows.
Identify all high-risk AI systems (recruitment algorithms, credit models, benefits decisions). Commission bias audits by internal data science teams or external specialists. Agree on bias thresholds with business owners (e.g., error rate parity within 5 percentage points across groups). Begin testing.
For each medium- and high-risk system, map data flows: where does data come from, where is it stored, who has access, how long is it retained, and when and how is it deleted? Draft privacy impact assessments (PIAs) documenting risk and mitigation. Update your policy's data minimisation section based on findings.
Write transparency language for each use case: job applicant notifications, customer recommendation disclosures, internal employee communications. Pilot the language with one business unit. Gather feedback. Refine and roll out.
Run a simulation: inject a fake incident scenario (e.g., "bias detected in hiring algorithm"). Test whether teams know how to escalate, document, and notify stakeholders. Train the AI Governance Board and incident response leads on the policy and their responsibilities. Update the incident report template based on the simulation.
Monitor new tool requests against the approved register. Review the register quarterly and update approvals. Conduct annual audits of high-risk systems. Capture feedback from Governance Board meetings and update the policy. Prepare an annual compliance report for executive leadership and the board.
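The quarterly register review and the compliance report can share one small piece of tooling: a check that flags register entries whose last review is older than a quarter. The sketch below assumes each entry records a last-reviewed date; the 90-day window and the names used are placeholders.

```python
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)  # roughly one quarter

# Hypothetical register slice: tool name -> date of last governance review
last_reviewed = {
    "ChatGPT Enterprise": date(2026, 1, 15),
    "GitHub Copilot": date(2026, 4, 2),
    "Claude API": date(2026, 2, 20),
}

def overdue_reviews(register: dict[str, date], today: date) -> list[str]:
    """Tools whose last review falls outside the quarterly window."""
    return sorted(tool for tool, reviewed in register.items()
                  if today - reviewed > REVIEW_WINDOW)

print(overdue_reviews(last_reviewed, today=date(2026, 5, 30)))
# ['ChatGPT Enterprise', 'Claude API'] -> raise at the next Governance Board meeting
```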
An AI policy is only effective if it is lived. A policy sitting in a shared folder, unread by teams and unenforced by leadership, provides no protection and carries no weight with regulators. Here is what matters:
1. Governance is a business decision, not a compliance checkbox. AI governance directly reduces liability, prevents costly incidents, and improves model quality through bias testing. Frame it as risk management and efficiency, not bureaucracy.
2. Start with your current stack, not a theoretical ideal. An approved tools register that lists every tool in use—including shadow AI—is more valuable than a pristine policy that ignores reality. Acknowledge what is happening, then put guardrails around it.
3. Match governance intensity to risk. Low-risk tools need light oversight; high-risk systems need rigorous governance. Proportionate governance gains buy-in and prevents false compliance.
4. Assign ownership and create accountability. A named Governance Board with clear decision rights is the difference between a policy that is enforced and one that is ignored.
5. Transparency and disclosure are non-negotiable. Regulators, employees, and customers increasingly expect to know when AI influences decisions. Building disclosure into your policy now avoids costly retrofitting later.
6. Review and iterate continuously. AI and regulation are moving fast. Your policy will be outdated within 12 months. Build annual review and quarterly board meetings into your governance cadence.
7. Measure and report outcomes. Track incidents, bias audit results, tool approval timelines, and security review compliance. Report these metrics to leadership and the board. What gets measured gets managed.
A comprehensive, documented AI policy—actively enforced and regularly reviewed—moves your organisation from defensive compliance to principled governance. It reduces risk, builds stakeholder trust, and positions your organisation as a responsible AI operator in a regulated world.
The cost of governance is real but manageable. The cost of not governing—reputational damage, regulatory fines, litigation, and customer loss—is vastly higher. An AI policy is not optional in 2026. The question is not whether you need one; it is whether you can afford to move from policy on paper to embedded governance in practice.