Agentic Security April 2026

Agentic AI Security

Multi-agent systems introduce attack surfaces that single-model deployments never had — inter-agent communication, autonomous tool use, delegation chains. Here’s what they are and how runtime monitoring addresses each one.


What Makes AI “Agentic”

An AI agent is a system that receives a goal, breaks it into sub-tasks, calls external tools, and acts on results — often without human approval at each step. A multi-agent system chains several of these together: one agent plans, another retrieves data, a third executes code, and a fourth validates the output.
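That plan–retrieve–execute–validate loop can be sketched in a few lines of Python. Everything below is hypothetical and stubbed for illustration — in a real system each stage would call a model or an external tool:

```python
# Hypothetical sketch of a four-role multi-agent pipeline:
# planner -> retriever -> executor -> validator.

def plan(goal):
    # Break the goal into sub-tasks (stubbed; a planner model would do this).
    return [f"subtask: {goal}"]

def retrieve(task):
    # Fetch external data for a sub-task (stubbed).
    return {"task": task, "data": "record-123"}

def execute(context):
    # Act on the retrieved data, e.g. via a tool call (stubbed).
    return f"processed {context['data']}"

def validate(result):
    # Check the output before it reaches the user (stubbed).
    return result.startswith("processed")

def run(goal):
    # Each sub-task flows through all four roles without human approval.
    results = []
    for task in plan(goal):
        out = execute(retrieve(task))
        if validate(out):
            results.append(out)
    return results
```

Note that no step in this loop asks a human for confirmation — which is exactly why the attack surfaces below matter.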

This architecture powers the most capable AI products shipping today — coding assistants that create pull requests, research agents that query databases and synthesize reports, customer-service systems that look up orders and issue refunds. The capability leap is real. So is the security gap.

Traditional AI security focused on a single model endpoint: you send a prompt, you get a response, you evaluate that response. Agentic systems break this model. The “response” isn’t text — it’s a sequence of actions executed across tools, APIs, and other agents, sometimes spanning minutes or hours.

Four Attack Surfaces Unique to Agentic AI

01

Inter-Agent Communication

When Agent A passes instructions to Agent B, those messages become an attack vector. A compromised or manipulated upstream agent can inject instructions that downstream agents execute without question — a form of indirect prompt injection that propagates through the entire chain.
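One first-pass mitigation is to treat every inter-agent message as untrusted data and screen it for instruction-like content before a downstream agent consumes it. A minimal sketch, with an illustrative pattern list — pattern matching alone is a heuristic, not a complete defense:

```python
import re

# Hypothetical filter: inter-agent messages are data, not instructions.
# These patterns are illustrative, not an exhaustive injection signature set.
SUSPECT_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
]

def screen_message(msg: str) -> bool:
    """Return True if the message looks like an injected instruction."""
    lowered = msg.lower()
    return any(re.search(p, lowered) for p in SUSPECT_PATTERNS)
```

A flagged message would be quarantined or stripped before Agent B ever sees it, cutting the propagation chain at the first hop.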

02

Tool-Use Exploits

Agents call APIs, execute code, read files, and write to databases. Each tool invocation is a privilege boundary. An attacker who controls what arguments an agent passes to a tool — through poisoned context or manipulated planning steps — can escalate from “read customer record” to “export all customer records.”
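One way to enforce that privilege boundary is a deny-by-default policy check on every tool invocation, scoped per tool rather than per agent. A hypothetical sketch — the tool names and policy fields are invented for illustration:

```python
# Hypothetical per-tool policy table: each invocation is checked against
# the scope this agent was granted, not just whether the tool exists.
TOOL_POLICIES = {
    "read_customer_record": {"max_records": 1},
    "export_customer_records": {"max_records": 0},  # never allowed here
}

def authorize_call(tool: str, args: dict) -> bool:
    policy = TOOL_POLICIES.get(tool)
    if policy is None:
        return False  # unknown tool: deny by default
    requested = args.get("record_count", 1)
    return requested <= policy["max_records"]
```

Under this check, poisoned context can still change *which* arguments the agent proposes, but the escalation from one record to a bulk export is blocked at the boundary.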

03

Delegation Chains

Multi-step delegation creates confused-deputy problems. Agent A has permission to delegate to Agent B, which can invoke Tool C. But was Agent A’s original instruction legitimate? By the time Tool C executes, the provenance of the request is three layers removed from any human decision.

04

Emergent Behavior

Individual agents pass unit tests. The composed system does something unexpected. Emergent failures aren’t bugs in any single component — they’re interaction effects that only appear when agents operate together in production with real data and real timing.

Why Unit Testing Falls Short

Standard AI testing validates a model’s responses to known inputs. You write a prompt, check the output, mark it pass or fail. This works for single-turn interactions. It breaks for agentic systems: the output is a sequence of actions rather than a single response, failures emerge from interactions between agents rather than from any one component, and behavior can drift over long-running tasks with real data and real timing.

This isn’t a shortcoming of testing teams. It’s a fundamental architectural gap. The only way to catch these failures is to observe the system as it runs.

Framework Mapping: OWASP, NIST, MITRE ATLAS

Agentic attack surfaces map directly to established risk taxonomies — they’re extensions of known categories, not a wholly new domain.

| Attack Surface | OWASP LLM Top 10 | MITRE ATLAS | NIST AI RMF |
| --- | --- | --- | --- |
| Inter-agent injection | LLM01: Prompt Injection | AML.T0051 | MG-2.2 |
| Tool-use escalation | LLM07: Insecure Plugin Design | AML.T0040 | MG-3.1 |
| Delegation-chain confusion | LLM08: Excessive Agency | AML.T0048 | GV-1.3 |
| Emergent behavior | LLM09: Overreliance | AML.T0043 | MS-2.6 |

Mapped to OWASP LLM Top 10 (2023), MITRE ATLAS v4.0, and NIST AI RMF 1.0.

Runtime Monitoring for Agentic Systems

Runtime monitoring watches agent behavior as it happens. Instead of testing what an agent might do, you observe what it is doing — every tool call, every inter-agent message, every decision in the delegation chain.

Three capabilities matter for agentic security:

Tool-Call Auditing

Every tool invocation is logged with its arguments, the requesting agent, the originating user instruction, and the returned data. Anomalous patterns — an agent suddenly requesting bulk exports when it usually reads single records — trigger alerts before data leaves the system.
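A minimal audit record might look like the following sketch. The field names and the fixed anomaly threshold are assumptions for illustration — a real system would learn per-agent baselines rather than hard-code a cutoff:

```python
import json
import time

# Hypothetical audit record for a single tool invocation.
def audit_tool_call(agent_id, tool, args, origin_instruction, result_size):
    entry = {
        "ts": time.time(),              # when the call happened
        "agent": agent_id,              # which agent made the call
        "tool": tool,                   # which tool was invoked
        "args": args,                   # the arguments it passed
        "origin": origin_instruction,   # the human instruction it traces to
        "result_size": result_size,     # how much data came back
    }
    # Illustrative anomaly rule: alert when far more data returns than usual.
    entry["alert"] = result_size > 100
    return json.dumps(entry)
```

Emitting the record as structured JSON means the same log line feeds both real-time alerting and after-the-fact forensics.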

Delegation-Chain Tracing

Every request in a multi-agent workflow carries provenance metadata — which human instruction originated the chain, which agents processed it, and what transformations occurred along the way. If a downstream agent receives instructions that can’t be traced to a legitimate origin, the chain is halted.
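Provenance can be carried as an envelope that each agent extends before forwarding. A hypothetical sketch — the field names are invented for illustration:

```python
import uuid

# Hypothetical provenance envelope attached to every inter-agent request.
def originate(human_instruction: str) -> dict:
    return {
        "chain_id": str(uuid.uuid4()),  # unique id for the whole chain
        "origin": human_instruction,    # the human instruction at the root
        "hops": [],                     # (agent, transformation) history
    }

def forward(envelope: dict, agent_id: str, transformation: str) -> dict:
    # Each agent appends itself before passing the request downstream.
    envelope = dict(envelope)
    envelope["hops"] = envelope["hops"] + [(agent_id, transformation)]
    return envelope

def verify(envelope: dict) -> bool:
    # A request with no traceable human origin halts the chain.
    return bool(envelope.get("origin")) and "chain_id" in envelope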

Behavioral Drift Detection

Over long-running tasks, an agent’s actions are compared against its established behavioral baseline. Gradual context drift — where accumulated tool outputs or inter-agent messages shift an agent’s behavior toward unsafe territory — is flagged before the agent crosses a policy boundary.
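One simple way to quantify drift is to compare the distribution of an agent's recent actions against its baseline, for example with total variation distance. A sketch, with an illustrative threshold:

```python
from collections import Counter

# Hypothetical drift check: compare an agent's recent action mix against
# its established baseline using total variation distance (0 = identical,
# 1 = completely disjoint behavior).
def action_distribution(actions):
    counts = Counter(actions)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def drift(baseline_actions, recent_actions, threshold=0.3):
    base = action_distribution(baseline_actions)
    recent = action_distribution(recent_actions)
    keys = set(base) | set(recent)
    tv = 0.5 * sum(abs(base.get(k, 0) - recent.get(k, 0)) for k in keys)
    return tv > threshold
```

An agent that normally reads single records but suddenly issues only export actions would score near 1.0 and be flagged well before any single call looks obviously wrong on its own.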

How GLACIS Approaches Agentic Security

GLACIS provides runtime observability for AI systems, including multi-agent architectures. The platform sits between your agents and the tools they call, monitoring behavior without adding latency to the critical path.

Mapped to OVERT controls ov-2.1 (runtime behavior logging), ov-3.1 (tool-call attestation), and ov-4.2 (multi-agent provenance tracking).

Interactive: Agentic Scan Visualization

See how GLACIS traces a multi-agent delegation chain in real time — from user instruction through tool execution.

Book a Live Demo

Secure Your Agent Fleet

Start with a free behavioral scan of your AI system, or book a 25-minute call to see multi-agent monitoring in action.

autoredteam on GitHub · Book a Scan Call