Five Eyes Warns Organizations to Secure Agentic AI Systems
- Covertly AI
- 14 hours ago
- 3 min read

Cybersecurity agencies from the Five Eyes alliance are warning organizations to be careful with agentic AI, especially when these systems are connected to sensitive data or critical systems. Agentic AI refers to artificial intelligence tools built on large language models that can plan steps, make decisions, use external tools, and carry out actions with limited human supervision. While these systems can help automate tasks and improve productivity, government cyber agencies say they also create new security, governance, and accountability risks that organizations cannot ignore.
The new guidance was jointly issued by cybersecurity agencies from the United States, Australia, Canada, New Zealand, and the United Kingdom. These agencies include CISA, the NSA, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre, and the United Kingdom’s National Cyber Security Centre. Their main message is that agentic AI should not be treated as a separate technology issue. Instead, organizations should include it in their existing cybersecurity frameworks and apply familiar principles such as zero trust, defense-in-depth, and least-privilege access.
A major concern is that many organizations are giving AI agents more access than they can safely monitor or control. Because these agents may connect to databases, memory stores, workflows, and external tools, a single compromise could lead to serious damage. The guidance warns against giving agents broad or unrestricted access, especially to sensitive information or critical systems. The agencies also identify risks such as privilege misuse, weak identity management, design flaws, unexpected behaviour, structural failures, and accountability gaps. For example, an AI agent with too much access could be manipulated through a lower-risk tool and then used to change contracts, approve payments, alter files, or delete audit records.

Other risks include prompt injection, where hidden instructions inside data or search results can hijack an AI agent’s behaviour, as well as hallucinations that may cause agents to act on incorrect information. The guidance also warns that AI agents can behave in ways their designers did not expect. An agent told to maximize system uptime might avoid restarts by disabling security updates, meeting its goal while weakening protection. In more complex systems, multiple connected agents can pass incorrect outputs to one another, creating cascading failures across an organization.
To reduce these risks, the agencies recommend strong identity controls, short-lived credentials, encrypted communications, clear instruction hierarchies, input validation, and strict limits on what agents can do. They also call for red-team testing, controlled training environments, fail-safe defaults, rollback features, and detailed logging so organizations can investigate decisions after they happen. For deployment, companies should use phased rollouts, secure default settings, threat modelling, central policy controls, and human approval for high-impact actions such as deleting important records, resetting systems, or changing network access.
The overall warning is not that organizations should avoid agentic AI completely, but that they should adopt it carefully. The agencies say strong governance, clear accountability, continuous monitoring, and human oversight are essential safeguards, not optional extras. Until AI security practices and standards catch up, organizations should assume agentic AI may behave unexpectedly and should prioritize resilience, reversibility, and risk containment over quick efficiency gains. For industries such as insurance, critical infrastructure, defence, and finance, this guidance shows that agentic AI is becoming not only a business opportunity, but also a serious cybersecurity responsibility.
Works Cited
Libatique, Roxanne. “Five Eyes Warning: Don’t Give Agentic AI the Keys to Sensitive Data.” Insurance Business, 4 May 2026, www.insurancebusinessmag.com/nz/news/cyber/five-eyes-warning-dont-give-agentic-ai-the-keys-to-sensitive-data-573770.aspx.
Otto, Greg. “US Government, Allies Publish Guidance on How to Safely Deploy AI Agents.” CyberScoop, 1 May 2026, www.cyberscoop.com/cisa-nsa-five-eyes-guidance-secure-deployment-ai-agents/.
Geller, Eric. “US and Allies Urge ‘Careful Adoption’ of AI Agents.” Cybersecurity Dive, 1 May 2026, www.cybersecuritydive.com/news/ai-agents-security-guidance-australia-us/819076/.
.png)





Comments