Context Isolation Needs Authority Isolation

The Missing Layer of Agentic Security

Niki A. Niyikiza
6 min read, 1116 words

Categories: Agentic Security

We isolate what an AI agent knows. Context windows. RAG. Memory scoping.

We haven’t figured out how to isolate what an agent can do.


The Decomposition Paradox

For the last decade, cloud security has been defined by decomposition. We broke monoliths into microservices specifically so we could isolate their data and permissions.

Agentic AI architectures are pulling us in two opposite directions.

1. Logically, we are decomposing. Projects like OpenAI Swarm have shown that as tasks get complex, single-loop agents struggle to maintain focus. The solution is decomposition: a Planner breaks down the task, and specialized Workers execute the pieces.

Planner ───► Research ───► Coding ───► Review

“If we try growing the routine with too many different tasks it may start to struggle… this is where we can leverage the notion of multiple routines [agents]… and handoffs.” — OpenAI Cookbook: Orchestrating Agents

2. Infrastructurally, we are consolidating. To make this chain efficient, we cannot spin up a new container for every step. A “Code Interpreter” worker often needs to download a library (requires Internet) and then analyze a dataframe (requires Isolation) within the same memory space. Splitting this across cold-start microservices destroys performance.

The result: we optimize for speed by creating a “God Object”, a single hot worker container that stays alive for the whole chain.

While the context narrows at each step (the worker only sees relevant tokens), the authority stays constant (the worker inherits the full IAM role).

The Planner thinks it’s handing off a tiny, safe sub-task (“Just fetch this config”). But the worker’s capabilities don’t change. Same network access. Same database connections. Same mounted secrets. The static role doesn’t know this task is supposed to be read-only.

We have decomposed the Intent, but we have consolidated the Authority.
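
To make that failure mode concrete, here is a minimal sketch (all names are mine, not from any framework): each handoff passes the worker a narrower slice of context, while the credentials live at process scope and never shrink.

```python
# Minimal sketch of the "God Object" worker (hypothetical names):
# the context narrows at every handoff, the authority never does.

FULL_IAM_ROLE = {"network": "*", "database": "rw", "secrets": "*"}  # deploy-time

def run_step(task: str, context: str, role: dict) -> None:
    """Execute one sub-task. Only `context` shrinks; `role` is constant."""
    print(f"{task}: sees {len(context)} chars, holds {role}")

pipeline = ["research", "coding", "review"]
context = "the full conversation history and every intermediate artifact ..."
for step in pipeline:
    context = context[: len(context) // 2]  # context isolation: narrower each hop
    run_step(step, context, FULL_IAM_ROLE)  # authority isolation: missing
```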


Static Identity vs. Dynamic Intent

IAM assumes identity is stable. If you are a “Database Worker,” you always need database access.

Agents don’t work that way. The same worker might be tasked to edit a CSS file, then decide to ‘fix’ the user data while it’s at it. It can, so it does.

The permission is attached to what the agent is (the service). Not what it’s doing right now (the flow).

As the LangChain team noted: “agent behavior is nondeterministic… the correct level of access may be heavily context dependent.”


Risk 1: Breaking the Rule of Two

This “God Mode” architecture violates the fundamental laws of safe agent design.

Meta’s “Practical AI Agent Security” paper defines the Agents Rule of Two: An agent must not satisfy more than two of the following conditions:

  1. Processing untrusted input
  2. Accessing sensitive data
  3. Changing state (or communicating externally)

Simon Willison calls this the Lethal Trifecta: if you combine all three, prompt injection becomes a data exfiltration vector.
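
What enforcing the rule could look like at the orchestration layer, as a sketch (the three flags mirror the conditions above; the function is hypothetical):

```python
def violates_rule_of_two(untrusted_input: bool,
                         sensitive_data: bool,
                         changes_state: bool) -> bool:
    """True when an agent satisfies all three conditions at once."""
    return untrusted_input + sensitive_data + changes_state > 2

# The Generalist Worker described below trips the check:
assert violates_rule_of_two(untrusted_input=True,  # browses the Internet
                            sensitive_data=True,   # reads financials.csv
                            changes_state=True)    # can communicate externally
```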

In a static world, we’d enforce this via Spatial Isolation: Server A processes untrusted input (Internet), Server B handles sensitive data (Database).

But the Generalist Worker collapses this separation.

Because the container requires network egress for its initialization phase (installing dependencies), it retains that access during its execution phase (analyzing financial data). The privilege persists longer than the need.

The sensitive data and the exfiltration channel now coexist in the same container.

You have violated the Rule of Two, not because of a configuration error, but because static IAM lacks the vocabulary to express: “Internet for setup, then silence for analysis.”
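
Here is a rough sketch of that missing vocabulary, assuming enforcement at the tool layer (class and fields are hypothetical): the worker holds a network capability during setup and surrenders it before it touches data.

```python
class PhasedWorker:
    """Illustrative only: network egress is a phase-scoped capability."""

    def __init__(self) -> None:
        self.network = True          # setup phase: egress allowed

    def setup(self) -> None:
        assert self.network
        print("pip install pandas")  # needs the Internet
        self.network = False         # "then silence for analysis"

    def analyze(self, path: str) -> None:
        if self.network:
            raise RuntimeError("refusing to open sensitive data with egress live")
        print(f"analyzing {path} in an air-gapped phase")

w = PhasedWorker()
w.setup()
w.analyze("financials.csv")
```

An in-process boolean is only an illustration, of course. A compromised agent won’t check its own flags, so real enforcement has to live outside the model’s reach, for example the runtime dropping the container’s egress route between phases.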


Risk 2: The Runaway Loop

The Rule of Two guards against exfiltration, not exhaustion.

Agents don’t have to be malicious to be dangerous. They just have to be clumsy.

Consider a DevOps agent tasked with launching a server. If it hallucinates a retry loop on RunInstances, IAM sees 50 valid requests and approves them all.

You didn’t get hacked. A flow-blind implementation got you a $50,000 bill and a Slack message from Finance.

IAM checks whether the request is allowed. It has no concept of once.

There is no primitive for: “This was permissible for this task, but only once, and only here.”
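
A sketch of what a “once” primitive could look like (illustrative, not an existing IAM feature): a grant that is spent atomically on first use.

```python
class SingleUseGrant:
    """Hypothetical 'once' primitive: valid for one call, then spent."""

    def __init__(self, action: str) -> None:
        self.action = action
        self.spent = False

    def authorize(self, action: str) -> bool:
        if self.spent or action != self.action:
            return False
        self.spent = True
        return True

grant = SingleUseGrant("ec2:RunInstances")
print(grant.authorize("ec2:RunInstances"))  # True: the task's one launch
print(grant.authorize("ec2:RunInstances"))  # False: the 49 retries bounce
```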

Static identity was never designed to regulate dynamic intention.


Subtraction = Intelligence

In an engineering deep-dive on the Manus agent, Lance Martin shared an insight about tool selection:

“If you allow user-configurable tools… the model is more likely to select the wrong action… In short, your heavily armed agent gets dumber.”

Their fix: a state machine that dynamically restricts available tools at each step. If the agent is in the “Browsing” phase, it physically cannot call the “Bash” tool. Fewer options, better decisions.
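
A toy version of that state machine (phase names and tool sets are mine): the dispatcher exposes only the tools legal in the current phase, so the wrong call is masked rather than merely denied.

```python
PHASE_TOOLS = {
    "browsing": {"open_url", "read_page"},
    "coding":   {"bash", "write_file"},
    "review":   {"read_file"},
}

def dispatch(phase: str, tool: str) -> None:
    """Route a tool call, masking everything outside the current phase."""
    if tool not in PHASE_TOOLS[phase]:
        raise PermissionError(f"{tool!r} is not exposed in the {phase!r} phase")
    print(f"running {tool} in {phase}")

dispatch("browsing", "open_url")       # fine
try:
    dispatch("browsing", "bash")       # masked: the option simply isn't there
except PermissionError as err:
    print(err)
```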

They discovered that subtraction improves capability.

Subtraction improves safety too. The less an agent can do at any given moment, the fewer side quests it can dream up.

If subtraction makes agents smarter and safer, we need infrastructure that can express it.


What’s Missing: Flow-Aware Authorization

We need authorization that understands flow, not just identity.

The primitive doesn’t exist in the standard cloud stack:

  • IAM is static. Decided at deploy time.
  • OAuth is user-scoped. Doesn’t understand task decomposition.
  • OPA is stateless. Can’t track delegation chains.

None of them can express:

“This agent could do X a minute ago, because that was the setup phase. It cannot do X now, because this is the analysis phase.”

That’s the gap.

Google DeepMind’s recent CaMeL paper tackles a related problem: capability-based data flow constraints that prevent prompt injections from exfiltrating sensitive information. That addresses data flow within an agent.

But there is another layer: what happens when agents delegate to other agents? How do you pass attenuated authority between independent containers, across network boundaries and trust contexts? Every boundary crossing is a trust decision: the callee should receive fresh authority, scoped to the task, not a copy of the caller’s permissions or a deploy-time tattoo it’s stuck with.
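
One plausible shape for that handoff is HMAC-chained caveats in the style of macaroons; the sketch below is my own, not a prescribed protocol. Each hop can append a restriction and re-sign, no hop can remove one, and only the minter can verify.

```python
import hashlib
import hmac

def chain(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def mint(root_key: bytes, token_id: str) -> dict:
    return {"id": token_id, "caveats": [], "sig": chain(root_key, token_id)}

def attenuate(token: dict, caveat: str) -> dict:
    """Any hop may narrow the token; the old signature keys the new one."""
    return {"id": token["id"],
            "caveats": token["caveats"] + [caveat],
            "sig": chain(token["sig"], caveat)}

def verify(root_key: bytes, token: dict) -> bool:
    """Only the minter can recompute the chain; a dropped caveat breaks it."""
    sig = chain(root_key, token["id"])
    for caveat in token["caveats"]:
        sig = chain(sig, caveat)
    return hmac.compare_digest(sig, token["sig"])

root = b"orchestrator-secret"
token = mint(root, "task-42")
token = attenuate(token, "data = financials.csv")  # planner narrows for the worker
token = attenuate(token, "network = deny")         # worker phase narrows further
assert verify(root, token)                         # caveats intact: accepted
token["caveats"].pop()                             # a hop tries to widen authority
assert not verify(root, token)                     # chain breaks: rejected
```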


CaMeL is a good intellectual foundation for securing the data flow. We also need to secure the authority flow.

We need an authorization protocol that follows the flow, not just the identity. Each phase gets only the capabilities it needs, scoped to the task, and expiring with it.

It might look like this:

Planner
  authority: { instances: *, network: *, data: * }

    ↓ narrows to

Orchestrator  
  authority: { instances: staging-*, network: allow, data: financials.csv }

    ├── Phase 1 ──→  Worker [Setup]
    │                capabilities: { instances: none, network: allow egress, data: none }
    │                → Can pip install. No data access.
    └── Phase 2 ──→  Worker [Analysis]
                     capabilities: { instances: none, network: deny, data: financials.csv }
                     → Air-gapped. Can read file. Cannot exfiltrate.

Same worker. Sequential phases. Flow-aware authorization.
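
The same flow rendered as data, as a sketch (field names follow the diagram; `Grant` and `narrows` are my inventions): the Orchestrator mints one grant per phase, and each grant may only keep or drop authority, never add it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    instances: str
    network: str
    data: str
    phase: str  # the grant names the flow step it belongs to

def narrows(child: Grant, parent: Grant) -> bool:
    """Toy check: a child may keep or drop authority, never add it."""
    return (child.data in ("none", parent.data)
            and (parent.network.startswith("allow") or child.network == "deny"))

orchestrator = Grant("staging-*", "allow", "financials.csv", phase="task")
setup        = Grant("none", "allow egress", "none",           phase="setup")
analysis     = Grant("none", "deny",         "financials.csv", phase="analysis")

for grant in (setup, analysis):
    assert narrows(grant, orchestrator)  # issuance fails if authority would widen
    print(grant)
```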

We’ve built Context Isolation. We’re missing Authority Isolation.

Until we fix that, we’re running God-mode improvisers inside production environments with privileges that outlive their purpose.

This has become my weekend rabbit hole. I’ve been sketching out what it would take to close this gap.

As an industry, we are building agents that improvise. That’s their power.

Their authority shouldn’t.