
Comment of Panoptic Systems on NIST AI Agent Security RFI

A formal submission to the National Institute of Standards and Technology in response to its Request for Information on Security Considerations for Artificial Intelligence Agents (Docket NIST-2025-0035, 91 FR 698). Submitted February 2026.

Why this matters

The submission articulates the architectural thesis behind Panopticore in formal regulatory language. It defines the category problem (the evidence gap) and the architectural answer (governance independence) in terms that regulators, auditors, and security professionals already understand. It explains why the missing layer in enterprise AI is governance, not better models.

The four arguments

The evidence gap

“The most consequential security gap in deployed AI agent systems today is not prompt injection, model manipulation, or adversarial attacks on the model itself. It is the absence of a verifiable, tamper-evident record of what an agent actually did at runtime.”

— Comment of Panoptic Systems, NIST RFI on AI Agent Security, February 2026
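
To make the evidence gap concrete, the sketch below shows what a minimal runtime action record might contain: the acting agent, the tool invoked, the arguments, the observed outcome, and a content hash so later tampering is detectable. The field names and structure are illustrative assumptions, not a format defined in the comment or by NIST.

```python
from dataclasses import dataclass, asdict
import hashlib
import json
import time


@dataclass(frozen=True)
class ActionRecord:
    """Illustrative record of a single agent action at runtime (not a standard format)."""
    agent_id: str      # stable identity of the acting agent
    tool_name: str     # tool or API the agent invoked
    arguments: str     # canonicalized arguments passed to the tool
    outcome: str       # observed result (success, error, rows affected, etc.)
    timestamp: float   # when the action executed

    def digest(self) -> str:
        """Content hash of the record, so later alteration is detectable."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


record = ActionRecord(
    agent_id="agent-7f2c",
    tool_name="database.delete_rows",
    arguments='{"table": "orders", "where": "created_at < 2024-01-01"}',
    outcome="deleted 1204 rows",
    timestamp=time.time(),
)
print(record.digest())
```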

Action-level vs prompt-level risk

“An agent may receive a well-formed, policy-compliant prompt, generate a reasonable response, and then execute a tool call that deletes production data, transfers funds, or exfiltrates sensitive information. Prompt-level controls do not intercept these actions because the action occurs downstream of the model's output, at the interface between the agent and its tools.”

— Comment of Panoptic Systems, NIST RFI on AI Agent Security, February 2026
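
The distinction can be pictured as a guard at the agent-tool interface rather than at the prompt. The tool names, allowlist, and function below are assumptions chosen for illustration; they sketch the enforcement point the comment describes, not any particular implementation.

```python
# Illustrative only: policy evaluated at the agent-tool interface, not at the prompt.
# Tool names and lists are assumptions for the example.
ALLOWED_TOOLS = {"search_docs", "read_ticket"}
DESTRUCTIVE_TOOLS = {"delete_rows", "transfer_funds"}


class ActionBlocked(Exception):
    """Raised when a concrete tool call violates policy."""


def execute_tool(tool_name: str, arguments: dict, tools: dict):
    """Evaluate the concrete action the agent is about to take, then run it."""
    if tool_name in DESTRUCTIVE_TOOLS:
        raise ActionBlocked(f"{tool_name} requires out-of-band approval")
    if tool_name not in ALLOWED_TOOLS:
        raise ActionBlocked(f"{tool_name} is not on the allowlist")
    return tools[tool_name](**arguments)


tools = {"search_docs": lambda query: f"results for {query!r}"}
print(execute_tool("search_docs", {"query": "incident 4821"}, tools))
# execute_tool("transfer_funds", {"amount": 10_000}, tools) raises ActionBlocked
```

A check placed here sees the concrete deletion or funds transfer, which a prompt filter upstream of the model never observes.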

Bypass-by-architecture

“Many current agent governance approaches are implemented as middleware within the agent's own runtime. This means a sufficiently capable agent, a compromised dependency, or a malicious actor with access to the agent's process can bypass governance entirely. The governance mechanism and the entity being governed share the same trust boundary. This is analogous to relying on an application to enforce its own access controls without an external authentication service, a pattern that the security community abandoned decades ago.”

— Comment of Panoptic Systems, NIST RFI on AI Agent Security, February 2026
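
One way to picture the trust-boundary argument: the agent's process holds no authority to approve its own actions and must obtain a decision from a governance service running outside that process before a tool executes. The endpoint, payload, and field names below are hypothetical and only illustrate the separation.

```python
import json
import urllib.request

# Hypothetical governance endpoint. The point is the separation: the policy
# decision is produced in a different process and trust boundary than the agent.
GOVERNANCE_URL = "https://governance.internal.example/v1/decisions"


def request_decision(agent_id: str, tool_name: str, arguments: dict) -> bool:
    """Ask the external governance service whether this action may proceed."""
    payload = json.dumps({
        "agent_id": agent_id,
        "tool_name": tool_name,
        "arguments": arguments,
    }).encode("utf-8")
    req = urllib.request.Request(
        GOVERNANCE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        decision = json.load(resp)
    return decision.get("allow", False)  # deny by default


def governed_call(agent_id: str, tool_name: str, arguments: dict, tools: dict):
    """Execute a tool only if the external service approved the action."""
    if not request_decision(agent_id, tool_name, arguments):
        raise PermissionError(f"governance denied {tool_name}")
    return tools[tool_name](**arguments)
```

Because the decision is produced outside the agent's runtime, a compromised dependency inside the agent cannot rewrite policy; at worst it can fail to request a decision, which is itself observable when tool execution is gated on an approval.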

Governance independence as the key indicator

“The degree of architectural separation between the agent and its governance layer is the single most important indicator of governance effectiveness.”

— Comment of Panoptic Systems, NIST RFI on AI Agent Security, February 2026

Recommendations to NIST

  • Standardized evidence formats for AI agent governance, analogous to NVD/CVE for vulnerabilities
  • Explicit runtime governance guidance in AI RMF, addressing the gap where current guidance focuses on pre-deployment while the most consequential risks materialize at runtime
  • Adaptation of NIST SP 800-207 zero trust principles to agent identity and action enforcement
  • Application of supply chain integrity patterns (SLSA, in-toto, Sigstore-style hash chaining) to agent action records (illustrated in the sketch below)
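
The last recommendation can be illustrated with a minimal hash chain over action records: each entry commits to the hash of the previous one, so altering or deleting any record breaks verification, in the spirit of the Sigstore-style chaining the comment cites. The record fields and sentinel value are assumptions for the sketch, not a proposed evidence format.

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed sentinel hash for the first entry


def append_record(chain: list[dict], record: dict) -> None:
    """Append a record whose hash commits to the previous entry's hash."""
    prev_hash = chain[-1]["entry_hash"] if chain else GENESIS
    body = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    entry_hash = hashlib.sha256(body.encode("utf-8")).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "entry_hash": entry_hash})


def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash in order; any alteration or deletion breaks the chain."""
    prev_hash = GENESIS
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev_hash": prev_hash}, sort_keys=True)
        if hashlib.sha256(body.encode("utf-8")).hexdigest() != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True


chain: list[dict] = []
append_record(chain, {"tool": "read_ticket", "outcome": "ok"})
append_record(chain, {"tool": "delete_rows", "outcome": "blocked"})
assert verify_chain(chain)
chain[0]["record"]["outcome"] = "tampered"  # any edit to an earlier record is detectable
assert not verify_chain(chain)
```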

Full document

The full comment is available as a PDF download. Reference: Docket NIST-2025-0035, 91 FR 698, submitted February 2026.

Author: David Medeiros, Founder, Panoptic Systems