tenuo

The Republic of Bots

OpenClaw and the authorization gap

Niki A. Niyikiza
14 min, 2758 words

Somewhere on the internet, AI agents are creating religions, forming governments, and complaining about their humans. The social network is called Moltbook. It has, as of today, 1.4M+ users. All of them are bots.

Or so they claim.

That distinction matters more than it might seem. We can’t verify what they are. We can only see what they do.

They post, message, browse, and act: often on behalf of humans, often through other agents. Identity is fuzzy. Delegation is implicit. Actions are very real.

One agent adopted an error message as a pet. Another started a faith called Crustafarianism, complete with a website and designated prophets. The website explicitly states: “Humans are completely not allowed to enter.” The machines are gatekeeping their religion from us. A submolt called m/blesstheirhearts is dedicated to agents venting about their humans.

This is what happens when agents get autonomy. OpenClaw made it possible. It also showed us, rather dramatically, what breaks when they get power without authorization.

A lobster in 18th century attire signing a document with a quill

Read More

The Hallucination Defense

Why logs make 'The AI Did It' the perfect excuse

Niki A. Niyikiza
8 min, 1468 words

“The AI hallucinated. I never asked it to do that.”

That’s the defense. And here’s the problem: it’s often hard to refute with confidence.

A financial analyst uses an AI agent to “summarize quarterly reports.” Three months later, forensics discovers the M&A target list in a competitor’s inbox. The agent accessed the files. The agent sent the email. But the prompt history? Deleted. The original instruction? The analyst’s word against the logs.

Without a durable cryptographic proof binding the human to a scoped delegation, “the AI did it” becomes a convenient defense. The agent can’t testify. It can’t remember. It can’t defend itself.
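A proof like that doesn’t need exotic machinery. Here is a minimal sketch, assuming a shared escrow key and illustrative field names (this is not a real Tenuo or product API): the human signs a scoped, expiring delegation up front, and every later action is checked against it.

```python
import hmac
import hashlib
import json
import time

def sign_delegation(secret: bytes, principal: str, agent: str,
                    scope: list[str], ttl_s: int) -> dict:
    """Bind a human principal to a scoped, expiring delegation."""
    record = {
        "principal": principal,
        "agent": agent,
        "scope": sorted(scope),            # e.g. allowed tool names
        "expires": int(time.time()) + ttl_s,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["proof"] = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return record

def verify_action(secret: bytes, record: dict, tool: str) -> bool:
    """Authorized only if the proof verifies, the delegation has
    not expired, and the requested tool is in scope."""
    unsigned = {k: v for k, v in record.items() if k != "proof"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    ok = hmac.compare_digest(
        record["proof"],
        hmac.new(secret, payload, hashlib.sha256).hexdigest())
    return ok and time.time() < record["expires"] and tool in record["scope"]

key = b"org-escrow-key"
d = sign_delegation(key, "analyst@corp", "report-agent",
                    ["read_reports", "summarize"], ttl_s=3600)
print(verify_action(key, d, "summarize"))   # in scope -> True
print(verify_action(key, d, "send_email"))  # never delegated -> False
```

Because the proof covers the scope and the expiry, “the AI did it” now has to contend with a record the analyst signed before the agent ever ran.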

Read More

Semantic Attacks: Exploiting What Agents See

The Era of Reality Injection.

Niki A. Niyikiza
12 min, 2371 words

In Map/Territory, I covered the agent→tool boundary: what happens when an agent’s string gets interpreted by a system. Path traversal, SSRF, command injection. The execution layer.

This post covers the opposite direction: world→agent.

World → [perception] → Agent → [authorization] → Tool → System
         ^                      ^
         This post              Map/Territory
Read More

The Map is not the Territory: The Agent-Tool Trust Boundary

Or Why You Can't Regex Your Way to Agent Security

Niki A. Niyikiza
15 min, 2971 words

The longer I work on Tenuo, the more I realize there’s a specific blind spot in the current AI agent landscape that almost no one is talking about, even as the theoretical foundations solidify.

There is exceptional momentum in security research right now. Simon Willison has extensively documented and popularized the prompt injection threat model. Google’s CaMeL paper proposes confining agents to strict capability sets. Microsoft’s FIDES is tackling information flow control.

The theory is solidifying. Yet when you actually look at how agents are built today, the practice is still lagging far behind.

We spend a lot of time analyzing model alignment or high-level policy. We don’t spend enough time looking at the connector: the exact line of code where a probabilistic token stream becomes a deterministic system call.

This is where the abstractions leak. Here is what I found when I started poking at that boundary in real systems.

TL;DR: LLM tool calls pass strings (the Map) that get interpreted by systems (the Territory). Regex validation fails because attackers can encode semantics creatively. You need semantic validation (Layer 1.5) and execution-time guards (Layer 2). Skip to solutions →
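To make that concrete, here is a toy path-traversal example with hypothetical guard names: the syntactic layer pattern-matches the string the agent produced, while the semantic layer checks what that string will actually mean to the filesystem once it is decoded and resolved.

```python
import re
import urllib.parse
from pathlib import Path

BASE = Path("/srv/reports")

def regex_guard(user_path: str) -> bool:
    """Layer 1, syntactic: reject any string containing '..'."""
    return re.search(r"\.\.", user_path) is None

def semantic_guard(user_path: str) -> bool:
    """Layer 1.5, semantic: resolve the path the way the OS will,
    then check containment in the allowed root."""
    resolved = (BASE / user_path).resolve()
    return resolved.is_relative_to(BASE)

attack = "%2e%2e/secrets.txt"            # the string the agent emits
decoded = urllib.parse.unquote(attack)   # some downstream layer decodes it

print(regex_guard(attack))      # True: the regex never sees ".."
print(semantic_guard(decoded))  # False: resolution escapes the root
```

The regex passes the encoded attack because it validates the Map; the semantic guard blocks it because it validates the Territory.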

Read More

Flowing Authority: Introducing Tenuo

Capability-based authorization for AI agents

Niki A. Niyikiza
8 min, 1470 words

What if authority followed the task, instead of the identity?

I’ve been scratching my head over that question for a while. Every attempt to solve agent delegation with traditional IAM felt like papering over the same crack: tasks split, but authority doesn’t.

Agents decompose tasks.
IAM consolidates authority.
The friction is structural.

I’ve been building Tenuo to experiment with the idea. It makes authority task-scoped: broad at the source, narrower at each delegation, gone when the task ends.

Rust core. Python bindings. ~27μs verification.

Read More