Skip to main content
Panopticore
← Insights
April 19, 2026 · Dave Medeiros · Governance

What We Told NIST About AI Agent Security

In February 2026, we submitted a formal comment to the National Institute of Standards and Technology in response to their Request for Information on Security Considerations for Artificial Intelligence Agents (Docket NIST-2025-0035, 91 FR 698).

The submission makes four arguments. Here they are in plain language, with the key passages quoted directly.

1. The evidence gap

The biggest security problem in deployed AI agent systems is not prompt injection or adversarial attacks on the model. It is the absence of a verifiable record of what the agent actually did.

“The most consequential security gap in deployed AI agent systems today is not prompt injection, model manipulation, or adversarial attacks on the model itself. It is the absence of a verifiable, tamper-evident record of what an agent actually did at runtime.”

Most organizations running agents in production cannot answer three questions: what did the agent do, which policies were in effect, and were those policies actually enforced. Observability dashboards record telemetry. They do not produce evidence an auditor can verify independently.

2. Action-level risk, not prompt-level

The dangerous moment is not when the agent receives a prompt. It is when the agent takes an action.

“An agent may receive a well-formed, policy-compliant prompt, generate a reasonable response, and then execute a tool call that deletes production data, transfers funds, or exfiltrates sensitive information. Prompt-level controls do not intercept these actions because the action occurs downstream of the model’s output, at the interface between the agent and its tools.”

Governance that focuses on prompt filtering misses the consequential surface. The action is where the risk materializes.

3. Bypass-by-architecture

Governance implemented as middleware inside the agent’s runtime can be bypassed by the agent itself, by a compromised dependency, or by anyone with access to the agent’s process.

“The governance mechanism and the entity being governed share the same trust boundary. This is analogous to relying on an application to enforce its own access controls without an external authentication service, a pattern that the security community abandoned decades ago.”

This is the structural argument against in-process governance. It applies to Microsoft’s Agent Governance Toolkit, to framework-layer approaches, and to any architecture where the governance mechanism and the governed entity share a runtime.

4. Governance independence

The degree of architectural separation between the agent and its governance layer is the single most important indicator of whether governance will hold.

“The degree of architectural separation between the agent and its governance layer is the single most important indicator of governance effectiveness.”

This is the design principle behind Panopticore: a separate process, a separate trust boundary, deterministic policy enforcement, and evidence that any third party can verify without vendor access.

Why this matters

The NIST submission puts the thesis behind Panopticore on the formal regulatory record. When Cloud Wars published “The Missing Layer in Enterprise AI: Governance and Auditability” two months later, the analyst validation and the policy validation pointed at the same architectural argument.

The full submission is available on our site.

Dave Medeiros
Dave Medeiros
Founder & CEO, Panoptic Systems, Inc.
LinkedIn →

Get new Insights in your inbox.

No spam. Unsubscribe anytime.