Agentic AI governance for SMBs: how to apply the Five Eyes guidance without freezing the business
A practical framework to apply the joint Five Eyes guidance on agentic AI in an SMB: scope, agent inventory, identity controls, human approvals, and auditable evidence.
On May 4, 2026, the Five Eyes agencies (CISA, NCSC UK, ASD, NZ NCSC, and CSE) published the first joint guidance on agentic AI adoption. The core message is direct: agentic AI amplifies existing weaknesses, so resilience should come before productivity.
For an SMB already running Cursor, Claude Code, Copilot, or n8n + LLM workflows in production, this is not theory. It is the first time an intergovernmental document sets concrete expectations that CISOs, auditors, and clients will start citing.
This guide is not legal advice. It turns a broad framework into controls that a small team can execute in 90 days.
Short answer
An SMB adopting AI agents should start with three deliverables: an inventory of agents and connected tools, a human approval policy, and an auditable evidence plan. Subsequent technical controls focus on agent identity, tool allowlists, data segmentation, and action logging.
What changed on May 4
| Before May 4 | After May 4 |
|---|---|
| Isolated NCSC and ENISA advisories | Coordinated document from five government agencies |
| “Agentic AI is an opportunity” | “Agentic AI amplifies existing weaknesses” |
| Optional compliance | Referenced in NIS2 and EU AI Act gap analyses |
| Pilots without a control framework | Internal pressure to pause anything ungoverned |
The guidance is not binding on its own, but it opens the door for regulated clients (mid-sized banks, insurers, private healthcare, public sector) to require it as a criterion in RFPs and due diligence.
Minimum governance map
An SMB does not need a large governance program. It needs to be able to answer five questions in less than a day:
- Which agents are running today and who approved them?
- Which tools, data, and systems can they access?
- Which actions require human approval before execution?
- Where are the logs and how long are they retained?
- What happens if an agent acts outside its scope?
If any answer is “I do not know”, that is the first gap to close.
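A lightweight way to keep those answers at hand is one structured record per agent. The sketch below is illustrative, not a prescribed schema: the field names and the example agent are assumptions, but each field maps to one of the five questions above.

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """One inventory row: enough to answer the five questions for a single agent."""
    name: str                      # which agent is running
    owner: str                     # who approved it and is accountable for it
    approved_on: str               # approval date (ISO format)
    tools: list[str]               # tools, MCPs, and systems it can reach
    data_scopes: list[str]         # data domains it touches
    human_approval_for: list[str]  # actions that need a human before execution
    log_location: str              # where its action logs are stored
    log_retention_months: int      # how long those logs are kept
    out_of_scope_runbook: str      # what happens if it acts outside its scope

# Hypothetical example entry; replace with your own agents.
inventory = [
    AgentRecord(
        name="invoice-drafting-agent",
        owner="finance-ops@example.com",
        approved_on="2026-05-20",
        tools=["crm-read", "email-draft"],
        data_scopes=["customer-billing"],
        human_approval_for=["send-email", "create-payment"],
        log_location="s3://agent-logs/invoice-drafting/",
        log_retention_months=12,
        out_of_scope_runbook="runbooks/agent-out-of-scope.md",
    ),
]
```

A spreadsheet with the same columns works just as well; what matters is that every agent has a row and every row has an owner.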
Controls that pay back first
| Control | What it prevents | Pragmatic implementation |
|---|---|---|
| Agent inventory | Shadow agents | Living sheet with owner, tools, data accessed, and use case |
| Dedicated identity per agent | Reusing human credentials | Service account with MFA in the deployment pipeline |
| Tool and MCP allowlist | Agent connected to “everything” | Permissions per role and project, deny by default |
| Human approval for critical actions | Irreversible automated actions | Approve-on-write for deletions, payments, deployments, mass sends |
| Data sandboxing | Sensitive data leakage | Separate environments for customer, HR, and financial data |
| Traceable logging | Impossible post-incident investigation | Immutable record of prompts, tools used, and outputs |
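To make the allowlist, approval, and logging controls concrete, here is a minimal sketch of a gate around tool calls, assuming a runtime where every tool call can be intercepted. The agent name, tool names, and the helper functions (`audit_log`, `request_human_approval`) are hypothetical placeholders for whatever your stack provides.

```python
import json
from datetime import datetime, timezone

# Hypothetical policy: only allowlisted tools may be called, and some verbs
# always require a human. Unknown agents get an empty set (deny by default).
ALLOWLIST = {
    "invoice-drafting-agent": {"crm-read", "email-draft"},
}
REQUIRES_HUMAN = {"delete", "payment", "deploy", "mass-send"}

def audit_log(agent: str, tool: str, action: str, decision: str) -> None:
    """Append one JSON line per decision; ship this file to centralized, immutable storage."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        "action": action,
        "decision": decision,
    }
    with open("agent-audit.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")

def request_human_approval(agent: str, tool: str, action: str) -> bool:
    """Placeholder: route to your chat or ticketing approval flow and wait for the answer."""
    return input(f"Approve '{action}' by {agent} via {tool}? [y/N] ").strip().lower() == "y"

def authorize_tool_call(agent: str, tool: str, action: str) -> bool:
    """Gate every call: deny by default, escalate critical actions to a human, log everything."""
    if tool not in ALLOWLIST.get(agent, set()):
        audit_log(agent, tool, action, "denied: tool not on allowlist")
        return False
    if action in REQUIRES_HUMAN:
        approved = request_human_approval(agent, tool, action)
        audit_log(agent, tool, action, f"human approval: {approved}")
        return approved
    audit_log(agent, tool, action, "allowed")
    return True
```

A useful side effect: shadow agents show up first in the deny branch, because anything missing from the inventory has no allowlist and every call it makes is denied and logged.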
How Five Eyes connects with NIS2 and EU AI Act
| Framework | What it asks for | How a single control covers it |
|---|---|---|
| Five Eyes agentic AI | Resilience, least privilege, human oversight | Inventory + allowlist + approvals |
| NIS2 | Risk management, supply chain, incident logging | Same inventory + auditable logs |
| EU AI Act (use) | Transparency, oversight, activity logging | Approval policy + per-agent logs |
| GDPR | Minimization, legal basis, data subject rights | Data sandboxing + access control |
A single set of technical evidence feeds multiple frameworks. The trap is building parallel documents for each one.
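One way to avoid that trap is to tag each piece of evidence with the framework requirements it supports, so the same record answers every audit. A minimal sketch, assuming a simple key-value catalogue; the field names and mappings are illustrative, not a legal reading of each framework.

```python
# One control, one piece of evidence, tagged with every requirement it supports.
evidence = {
    "control": "per-agent action logs, 12-month retention",
    "location": "s3://agent-logs/",
    "owner": "security@example.com",
    "supports": {
        "five_eyes_agentic_ai": ["human oversight", "resilience"],
        "nis2": ["incident logging"],
        "eu_ai_act": ["activity logging"],
    },
}
```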
A 90-day plan for an SMB
| Week | Focus | Verifiable deliverable |
|---|---|---|
| 1-2 | Discovery | Inventory of agents, tools, and active MCPs |
| 3-4 | Identity | Dedicated service accounts and MFA in the deployment console |
| 5-6 | Permissions | Allowlist per project and role, deny by default |
| 7-8 | Approvals | Policy for actions that require human confirmation |
| 9-10 | Logs | Centralization and minimum 12-month retention |
| 11-12 | Simulation | “Agent out of scope” exercise with a runbook |
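For weeks 11-12, the simulation can start as small as an automated check that an out-of-scope call is blocked and leaves a trace. A minimal sketch, assuming the allowlist gate sketched earlier in this article is available as `authorize_tool_call` and writes to `agent-audit.jsonl`:

```python
# Minimal "agent out of scope" drill: confirm a call outside the allowlist
# is denied and leaves an audit trail that the runbook can point to.
def test_out_of_scope_call_is_denied_and_logged():
    denied = authorize_tool_call("invoice-drafting-agent", "prod-db-write", "delete")
    assert denied is False
    with open("agent-audit.jsonl") as f:
        last_entry = f.readlines()[-1]
    assert "denied" in last_entry
```

The human part of the exercise, walking the runbook with the agent's owner, is what turns this from a test into a rehearsal.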
Common mistakes
- Starting with the document, not with the inventory.
- Confusing “having Claude Code” with “having agent governance”.
- Letting each team connect MCPs without review.
- Using human credentials so the agent “has access to everything”.
- Logging without defining retention or owner.
- Buying a governance tool before knowing which agents exist.
Progress indicators
| Indicator | Good | Bad |
|---|---|---|
| Inventoried agents | Owner and use case defined | Incomplete list with no owner |
| Permissions | Allowlist per project and quarterly review | Broad access “to avoid blocking work” |
| Critical actions | Documented human approval | Full automation with no control |
| Logs | Centralized, auditable, and retained | Scattered or missing |
| Incidents | Runbook rehearsed at least once | Reactive improvisation |
When to invest more
Additional investment is justified when at least one signal appears: explicit regulatory pressure in RFPs, agents accessing personal or financial data, or automated decisions with customer impact. Before that, the six controls in the table above cover most of the risk with limited effort.
Working sources
- Joint Five Eyes guidance on agentic AI adoption (CISA, NCSC UK, ASD, NZ NCSC, CSE), May 4, 2026.
- NIST AI Risk Management Framework as a complementary control reference.
- EU AI Act and obligations for high-risk systems depending on the use context.
- Technical and compliance decisions must be adapted to each company’s sector, size, and processed data.