darxai: engineering, AI, and cybersecurity
Agentic AI governance for SMBs: how to apply the Five Eyes guidance without freezing the business

Cybersecurity · 4 min read

A practical framework to apply the joint Five Eyes guidance on agentic AI in an SMB: scope, agent inventory, identity controls, human approvals, and auditable evidence.


On May 4, 2026, the Five Eyes agencies (CISA, NCSC UK, ASD, NZ NCSC, and CSE) published the first joint guidance on agentic AI adoption. The core message is direct: agentic AI amplifies existing weaknesses, so resilience should come before productivity.

For an SMB already running Cursor, Claude Code, Copilot, or n8n + LLM workflows in production, this is not theory. It is the first time an intergovernmental document sets concrete expectations that CISOs, auditors, and clients will start citing.

This guide is not legal advice. It turns a broad framework into controls that a small team can execute in 90 days.

Short answer

An SMB adopting AI agents should start with three deliverables: an inventory of agents and connected tools, a human approval policy, and an auditable evidence plan. Subsequent technical controls focus on agent identity, tool allowlists, data segmentation, and action logging.

What changed on May 4

| Before May 4 | After May 4 |
| --- | --- |
| Isolated NCSC and ENISA advisories | Coordinated document from five government agencies |
| "Agentic AI is an opportunity" | "Agentic AI amplifies existing weaknesses" |
| Optional compliance | Referenced in NIS2 and EU AI Act gap analyses |
| Pilots without a control framework | Internal pressure to pause anything ungoverned |

The guidance is not binding on its own, but it opens the door for regulated clients (mid-sized banks, insurers, private healthcare, public sector) to require it as a criterion in RFPs and due diligence.

Minimum governance map

An SMB does not need a large governance program. It needs to be able to answer five questions in less than a day:

  1. Which agents are running today and who approved them?
  2. Which tools, data, and systems can they access?
  3. Which actions require human approval before execution?
  4. Where are the logs and how long are they retained?
  5. What happens if an agent acts outside its scope?

If any answer is “I do not know”, that is the first gap to close.
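The five questions above can all be answered from one structured inventory. A minimal sketch in Python, where every field name, agent name, and runbook ID is an illustrative assumption rather than a mandated schema:

```python
# Minimal agent inventory: one record per deployed agent.
# Field names, agent names, and IDs are illustrative, not a standard.
AGENTS = [
    {
        "name": "invoice-triage",
        "owner": "finance-ops",               # Q1: who approved and owns it
        "tools": ["mail-read", "erp-read"],   # Q2: tools/MCPs it can reach
        "data": ["invoices"],                 # Q2: data domains it touches
        "human_approval": ["erp-write"],      # Q3: actions gated by a person
        "log_sink": "central-audit-store",    # Q4: where actions are logged
        "retention_months": 12,               # Q4: how long logs are kept
        "out_of_scope_runbook": "RB-07",      # Q5: what happens on overreach
    },
]

REQUIRED = ["owner", "tools", "human_approval", "log_sink",
            "retention_months", "out_of_scope_runbook"]

def gaps(agents):
    """Every missing or empty field is an 'I do not know' to close first."""
    return [(a["name"], field)
            for a in agents
            for field in REQUIRED
            if not a.get(field)]

print(gaps(AGENTS))  # an empty list means the five questions are answerable
```

Running `gaps()` over the full inventory each week turns "I do not know" into a concrete, shrinking list.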

Controls that pay back first

| Control | What it prevents | Pragmatic implementation |
| --- | --- | --- |
| Agent inventory | Shadow agents | Living sheet with owner, tools, data accessed, and use case |
| Dedicated identity per agent | Reusing human credentials | Service account with MFA in the deployment pipeline |
| Tool and MCP allowlist | Agent connected to "everything" | Permissions per role and project, deny by default |
| Human approval for critical actions | Irreversible automated actions | Approve-on-write for deletions, payments, deployments, mass sends |
| Data sandboxing | Sensitive data leakage | Separate environments for customer, HR, and financial data |
| Traceable logging | Impossible post-incident investigation | Immutable record of prompts, tools used, and outputs |
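The allowlist control reduces to one check before every tool call: anything not explicitly granted is denied. A sketch, where the agent, project, and tool names are hypothetical examples:

```python
# Deny-by-default tool allowlist, keyed by (agent, project).
# All names here are hypothetical examples.
ALLOWLIST = {
    ("support-bot", "crm"): {"ticket-read", "ticket-comment"},
    ("report-agent", "finance"): {"ledger-read"},
}

def is_allowed(agent: str, project: str, tool: str) -> bool:
    """Anything not explicitly granted is denied, including unknown agents."""
    return tool in ALLOWLIST.get((agent, project), set())

assert is_allowed("support-bot", "crm", "ticket-read")
assert not is_allowed("support-bot", "crm", "ticket-delete")    # never granted
assert not is_allowed("support-bot", "finance", "ticket-read")  # wrong project
assert not is_allowed("unknown-agent", "crm", "ticket-read")    # unknown agent
```

The key property is the default: an unknown (agent, project) pair yields an empty set, so new or shadow agents get nothing until someone grants it.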

How Five Eyes connects with NIS2 and EU AI Act

| Framework | What it asks for | How a single control covers it |
| --- | --- | --- |
| Five Eyes agentic AI | Resilience, least privilege, human oversight | Inventory + allowlist + approvals |
| NIS2 | Risk management, supply chain, incident logging | Same inventory + auditable logs |
| EU AI Act (use) | Transparency, oversight, activity logging | Approval policy + per-agent logs |
| GDPR | Minimization, legal basis, data subject rights | Data sandboxing + access control |

A single set of technical evidence feeds multiple frameworks. The trap is building parallel documents for each one.
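That shared evidence can be as simple as an append-only, hash-chained action log: one record per agent action, each entry embedding the hash of the previous one so later edits are detectable. A minimal sketch, with field names as assumptions:

```python
import hashlib
import json

def log_action(prev_hash: str, agent: str, tool: str,
               prompt: str, output: str, ts: str) -> dict:
    """One audit entry per agent action. Each entry embeds the hash of the
    previous one, so after-the-fact edits break the chain (tamper-evident)."""
    entry = {"ts": ts, "agent": agent, "tool": tool,
             "prompt": prompt, "output": output, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

def chain_is_intact(entries: list) -> bool:
    """Recompute every hash and verify each link to the previous entry."""
    prev = "genesis"
    for e in entries:
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True

e1 = log_action("genesis", "invoice-triage", "erp-read",
                "list unpaid invoices", "3 results", "2026-05-10T09:00:00Z")
e2 = log_action(e1["hash"], "invoice-triage", "mail-send",
                "draft reminder", "sent", "2026-05-10T09:01:00Z")
assert chain_is_intact([e1, e2])
e2["output"] = "edited later"          # tampering breaks the chain
assert not chain_is_intact([e1, e2])
```

This is tamper-evident, not tamper-proof; for stronger guarantees the chain head should be shipped to write-once storage, but the same record already serves Five Eyes, NIS2, EU AI Act, and GDPR evidence requests.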

A 90-day plan for an SMB

| Week | Focus | Verifiable deliverable |
| --- | --- | --- |
| 1-2 | Discovery | Inventory of agents, tools, and active MCPs |
| 3-4 | Identity | Dedicated service accounts and MFA in the deployment console |
| 5-6 | Permissions | Allowlist per project and role, deny by default |
| 7-8 | Approvals | Policy for actions that require human confirmation |
| 9-10 | Logs | Centralization and minimum 12-month retention |
| 11-12 | Simulation | "Agent out of scope" exercise with a runbook |

Common mistakes

  1. Starting with the document, not with the inventory.
  2. Confusing “having Claude Code” with “having agent governance”.
  3. Letting each team connect MCPs without review.
  4. Using human credentials so the agent “has access to everything”.
  5. Logging without defining retention or owner.
  6. Buying a governance tool before knowing which agents exist.

Progress indicators

| Indicator | Good | Bad |
| --- | --- | --- |
| Inventoried agents | Owner and use case defined | Incomplete list with no owner |
| Permissions | Allowlist per project and quarterly review | Broad access "to avoid blocking work" |
| Critical actions | Documented human approval | Full automation with no control |
| Logs | Centralized, auditable, and retained | Scattered or missing |
| Incidents | Runbook rehearsed at least once | Reactive improvisation |

When to invest more

Additional investment is justified when at least one signal appears: explicit regulatory pressure in RFPs, agents accessing personal or financial data, or automated decisions with customer impact. Before that, the six controls in the table above cover most of the risk with limited effort.

Working sources

  • Joint Five Eyes guidance on agentic AI adoption (CISA, NCSC UK, ASD, NZ NCSC, CSE), May 4, 2026.
  • NIST AI Risk Management Framework as a complementary control reference.
  • EU AI Act and obligations for high-risk systems depending on the use context.
  • Technical and compliance decisions must be adapted to each company’s sector, size, and processed data.

Next step

Want to apply cybersecurity and compliance in your company?

We assess, harden, and monitor systems, applications, and processes to reduce risk and support compliance with ENS, NIS2, DORA, and GDPR.