Panopticore
April 25, 2026 · Dave Medeiros · Governance

Who Governs the Agent at Runtime?

Amit Agarwal asked a question on his Substack that cuts to the center of the agent governance problem: who governs the agent at runtime?

It is the right question because it exposes a gap most governance frameworks avoid. Pre-deployment review, model evaluation, red-teaming, prompt engineering: all of these happen before the agent acts. They are necessary. They are not sufficient. The consequential moment is not when the agent is designed, tested, or deployed. It is when the agent is about to do something.

The runtime gap

Most governance today targets the model or the prompt. Guardrails constrain what the model can say. Evaluations measure whether the model behaves as expected under test conditions. Alignment research tries to make the model want the right things.

None of this governs what the agent does at runtime. An agent can receive a well-formed prompt, generate a reasonable response, and then execute a tool call that deletes production data. The prompt was fine. The model behaved as expected. The action was catastrophic. Prompt-level governance did not and could not intercept it, because the action occurs downstream of the model’s output, at the interface between the agent and its tools.

This is the gap Amit’s question surfaces. And it is the gap that requires a fundamentally different architectural answer.

Why the answer is infrastructure, not middleware

If runtime governance matters, then the governance layer must be present at runtime. The question is where it sits.

If it sits inside the agent’s process, as middleware in the agent’s framework, it shares the agent’s trust boundary. A compromised dependency, a sufficiently capable agent, or a malicious actor with access to the process can bypass it. The governance mechanism and the entity being governed are the same entity. We documented this in our NIST submission as the bypass-by-architecture problem.

If it sits outside the agent’s process, at the network boundary, in a separate trust domain, then the agent cannot modify, disable, or bypass it. Policy is evaluated deterministically against observable actions. Evidence is produced independently. The governance layer exists in a boundary the agent does not control.
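A minimal sketch of what deterministic, out-of-process policy evaluation can look like. The rule set and field names here are illustrative, not Panopticore's actual policy language: the point is that the gateway holds the rules in its own trust domain and evaluates each observable tool call the same way every time.

```python
# Sketch of an out-of-process policy gateway (hypothetical rules).
# The agent sends it intended tool calls; the gateway decides.
# Because the rules live outside the agent's process, the agent
# cannot modify, disable, or bypass them.

from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    tool: str    # e.g. "db.delete", "http.get"
    target: str  # the resource the call operates on

# Deny rules, evaluated deterministically against the observable action.
DENY_RULES = [
    lambda c: c.tool == "db.delete" and c.target.startswith("prod/"),
    lambda c: c.tool == "shell.exec",
]

def evaluate(call: ToolCall) -> str:
    """Return 'deny' if any rule matches the action, else 'allow'."""
    return "deny" if any(rule(call) for rule in DENY_RULES) else "allow"
```

Note that nothing here inspects the prompt or the model's reasoning; the decision is a pure function of the action the agent is about to take.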

This is where Panopticore sits. Not because out-of-process governance is a novel concept: firewalls, reverse proxies, and external authentication services have used this pattern for decades. But because no one had applied it to AI agent governance with a self-contained, offline-verifiable evidentiary artifact.

What runtime governance produces

The answer to “who governs the agent at runtime?” is not a model, not a framework, and not a middleware library. It is an independent system that can answer three questions after the fact:

  1. What did the agent do?
  2. Which policies were in effect?
  3. Were those policies actually enforced?

The answer is an Evidence Binder: a cryptographically signed, self-contained record that any third party can verify offline, without vendor access, without a dashboard login, and without trusting the system that produced it.
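To make "verify offline, without trusting the producer" concrete, here is a toy sketch of sealing and checking such a record. The field names are illustrative, and a keyed HMAC stands in for a real public-key signature scheme purely for brevity; the structure, not the crypto, is the point.

```python
# Toy sketch of a self-contained, offline-verifiable evidence record.
# Illustrative only: a real binder would use asymmetric signatures so
# the verifier needs no secret, only the producer's public key.

import hashlib
import hmac
import json

KEY = b"demo-signing-key"  # stand-in for a real signing key

def seal(record: dict) -> dict:
    """Canonicalize the record and attach a verification tag."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify(binder: dict) -> bool:
    """Re-derive the tag from the record alone -- no dashboard, no vendor."""
    payload = json.dumps(binder["record"], sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, binder["tag"])

binder = seal({
    "action": "db.delete prod/users",
    "policy": "deny prod deletes (v3)",
    "decision": "deny",
})
```

Because the record carries everything needed to check it, any tampering with what the agent did, which policy applied, or what was decided makes verification fail.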

That is what runtime governance looks like when you take the question seriously.

Dave Medeiros
Founder & CEO, Panoptic Systems, Inc.
