NIST February 2026

NIST AI Agent Standards Initiative, led by the Center for AI Standards and Innovation (CAISI)

Launched February 17, 2026. NIST explicitly identified three requirements for trustworthy AI agents: authenticated identity, scoped permissions, and logged activity. WCP was designed to address all three.

nist.gov/caisi ↗
NIST Requirement | WCP Mechanism | Namespace
Agent authentication | Registered, authenticated worker identity | wrk.*
Permission scoping | Machine-readable capability declarations | cap.*
Permission scoping | Versioned operator-defined authorization policies | pol.*
Activity logging and auditing | Tamper-evident event stream of all agent actions | evt.*
WCP aligns with NIST Pillar 2 — community-led open-source protocol development for AI agent security. FΔFΌ★LΔB submitted a public comment to the CAISI security track on March 3, 2026.
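The three requirements in the table above can be sketched as the data a WCP router would consume. This is an illustrative sketch only: every field name below is a hypothetical placeholder, not the WCP wire format.

```python
# Sketch of the three NIST-aligned WCP objects. Field names are
# hypothetical placeholders, not the actual WCP schema.

worker = {  # wrk.* -- authenticated worker identity
    "id": "wrk.billing-agent-01",
    "public_key": "ed25519:AAAA...",  # placeholder key material
}

capabilities = [  # cap.* -- machine-readable scoped permissions
    {"id": "cap.invoice.read"},
    {"id": "cap.invoice.create", "limit_usd": 500},
]

def is_declared(cap_id: str) -> bool:
    """A request is in scope only if the worker explicitly declared it."""
    return any(c["id"] == cap_id for c in capabilities)
```

Activity logging (evt.*) then records every decision the router makes against these declarations.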
EU Law In force 2024 — High-risk compliance: August 2, 2026

EU AI Act — Articles 9 and 12

The EU AI Act mandates risk management systems (Art. 9) and automatic logging of events for high-risk AI systems (Art. 12). WCP's event system is a direct technical implementation of Art. 12's requirements.

artificialintelligenceact.eu ↗
EU AI Act Requirement | WCP Mechanism | Namespace
Art. 9 — Risk management system | Policy profiles defining authorized capabilities and deny rules | pol.*, ctrl.*
Art. 12 — Automatic logging of events | Tamper-evident event stream with dispatch, deny, and steward events | evt.*
Art. 13 — Transparency and information provision | Capability declarations expose agent permissions in machine-readable form | cap.*
High-risk AI providers must comply by August 2, 2026. WCP's evt.* event stream satisfies Art. 12's audit trail requirement out of the box.
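A tamper-evident event stream of the kind Art. 12 calls for can be approximated with a hash chain, where each event commits to its predecessor so any post-hoc edit is detectable. The class below is a minimal sketch, assuming SHA-256 chaining over JSON payloads; WCP's actual evt.* encoding is defined by the spec, not by this code.

```python
import hashlib
import json

class EventLog:
    """Append-only log sketch: each record hashes its predecessor, so
    editing any past event breaks the chain on verification."""

    def __init__(self):
        self.events = []
        self._prev = "0" * 64  # genesis hash

    def append(self, event: dict) -> str:
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.events.append({"event": event, "prev": self._prev, "hash": digest})
        self._prev = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain from genesis; False if anything was altered."""
        prev = "0" * 64
        for rec in self.events:
            payload = json.dumps(rec["event"], sort_keys=True)
            digest = hashlib.sha256((prev + payload).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True
```

An auditor can rerun `verify()` at any time to confirm the record is intact, which is the property Art. 12's logging requirement and Art. 9's risk controls both lean on.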
OWASP 2025

OWASP Top 10 for LLM Applications

The OWASP LLM Top 10 is the security baseline for AI applications. WCP directly mitigates the risks most acute for agentic systems: excessive agency, overreliance, and misinformation.

owasp.org ↗
OWASP Risk | WCP Mitigation | Namespace
LLM06 — Excessive Agency | Least-privilege capability declarations. Agents declare exactly what they need; the router denies everything else by default. | cap.*, pol.*
Overreliance (LLM09 in the 2023 edition) | Mandatory blast radius scoring before dispatch; the STEWARD_HOLD outcome routes high-risk decisions to human review. | Router decisions
LLM09 — Misinformation | Evidence receipts and the audit trail tie every output to the agent, capability, and policy that authorized it. | evt.*
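The LLM06 mitigation above amounts to a fail-closed routing check. The function below is a sketch under assumed names (`route`, and string outcomes): a capability must be explicitly declared (cap.*) and not hit a policy deny rule (pol.*); everything else is denied by default.

```python
def route(requested: str, declared: set, denied: set) -> str:
    """Fail-closed capability gating sketch (API shape is hypothetical)."""
    if requested in denied:
        return "DENY"  # explicit pol.* deny rule wins
    if requested not in declared:
        return "DENY"  # default deny: the LLM06 least-privilege posture
    return "DISPATCH"
```

The important design choice is the ordering: deny rules are checked first and the absence of a declaration is itself a denial, so a misconfigured or missing policy can never widen an agent's reach.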
NIST SP 800-53 Rev 5

NIST SP 800-53 — Security and Privacy Controls

The US federal security controls catalog used by all federal agencies and most large enterprises. WCP's ctrl.* namespace maps directly to SP 800-53 control IDs, allowing compliance teams to adopt WCP without translation work.

csrc.nist.gov ↗
SP 800-53 Control | WCP Mapping | WCP Control ID
AC-2 — Account Management | Worker registration and identity lifecycle | ctrl.identity.secrets_deny_default
AC-6 — Least Privilege | Capability declarations enforce minimum necessary permissions | cap.* namespace
AU-2 — Event Logging | Mandatory telemetry: dispatch, deny, steward events | evt.* namespace
AU-9 — Protection of Audit Information | Tamper-evident event stream; evidence receipts | evt.* namespace
CM-7 — Least Functionality | Fail-closed router: deny by default unless capability explicitly declared | Router default
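Because ctrl.* IDs map directly to SP 800-53 controls, a compliance team's "translation work" reduces to a table lookup. The sketch below assumes a hypothetical crosswalk dictionary; only the ctrl.identity.secrets_deny_default to AC-2 pairing comes from this document's table, and the helper itself is not part of WCP.

```python
# Hypothetical crosswalk; only the first pairing appears in the table above.
CROSSWALK = {
    "ctrl.identity.secrets_deny_default": ["AC-2"],
}

def satisfied_controls(adopted: list) -> list:
    """SP 800-53 controls satisfied by a set of adopted WCP controls."""
    out = set()
    for ctrl_id in adopted:
        out.update(CROSSWALK.get(ctrl_id, []))
    return sorted(out)
```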
ISO/IEC Published December 2023

ISO/IEC 42001 — AI Management Systems

The first international certifiable AI management standard. 38 distinct controls covering AI governance, risk management, data governance, and system lifecycle. Required by enterprise procurement in regulated industries. 76% of organizations plan to pursue ISO 42001 certification (CSA 2025).

iso.org/standard/42001 ↗
ISO 42001 Requirement | WCP Mechanism | Namespace
AI risk management controls | Policy profiles defining authorized capabilities and deny rules | pol.*, ctrl.*
Continuous monitoring and improvement | Tamper-evident event stream with full decision audit trail | evt.*
AI system lifecycle management | Versioned policy profiles and capability declarations | pol.*, cap.*
Third-party and supplier oversight | Worker identity and capability auditing per worker | wrk.*, cap.*
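The lifecycle-management row relies on policy profiles being versioned rather than edited in place, so any past decision can be traced to the exact policy that authorized it. The profile schema and `amend` helper below are illustrative assumptions, not the WCP format.

```python
# Hypothetical pol.* profile; field names are illustrative.
policy = {
    "id": "pol.finance-agents",
    "version": 3,
    "allow": ["cap.invoice.create", "cap.invoice.read"],
    "deny": ["cap.payment.execute"],
}

def amend(profile: dict, allow=(), deny=()) -> dict:
    """Return a new, version-bumped profile instead of mutating history."""
    return {
        **profile,
        "version": profile["version"] + 1,
        "allow": sorted(set(profile["allow"]) | set(allow)),
        "deny": sorted(set(profile["deny"]) | set(deny)),
    }
```

Keeping prior versions immutable is what lets the evt.* audit trail reference "policy pol.finance-agents v3" and have that reference stay meaningful after the policy evolves.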
NIST AI RMF 1.0 — Generative AI Profile NIST-AI-600-1 (July 2024)

NIST AI Risk Management Framework (AI RMF 1.0)

The US voluntary AI risk framework referenced by US federal agencies including the CFPB, FDA, SEC, FTC, and EEOC. WCP maps to all four RMF functions: Govern, Map, Measure, and Manage.

nist.gov/itl/ai-risk-management-framework ↗
RMF Function | WCP Mechanism | Namespace
GOVERN — Policies and accountability structures | Versioned operator-defined policy profiles | pol.*
MAP — Capability catalogs and threat identification | Machine-readable capability declarations per worker | cap.*
MEASURE — Performance and bias assessment | Blast radius scoring, dry-run mode, capability envelope testing | Router decisions
MANAGE — Incident response and continuous monitoring | Tamper-evident event stream; STEWARD_HOLD for human escalation | evt.*
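The MEASURE-to-MANAGE handoff can be sketched as a threshold check: score an action's blast radius before dispatch, and escalate to a human steward when it is too high. The scoring scale, threshold value, and function shape below are hypothetical; WCP defines the STEWARD_HOLD outcome, not this scoring model.

```python
def decide(blast_radius: float, hold_threshold: float = 0.7) -> str:
    """Pre-dispatch decision sketch: at or above the (assumed) threshold,
    route to human review instead of executing."""
    if blast_radius >= hold_threshold:
        return "STEWARD_HOLD"  # a human reviews before anything runs
    return "DISPATCH"
```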
MITRE Updated October 2025 — 15 tactics, 66 techniques

MITRE ATLAS — Adversarial Threat Landscape for AI Systems

The only threat matrix designed specifically for AI/ML systems. The October 2025 update added 14 new techniques for agentic AI. WCP's capability gating and audit trail directly counter the most exploited ATLAS techniques.

atlas.mitre.org ↗
ATLAS Threat | WCP Mitigation | Namespace
AI Agent Context Poisoning | Input validation enforced at the capability boundary before dispatch | cap.*
Memory Manipulation | Worker identity integrity checks; tamper-evident event log detects post-hoc modification | wrk.*, evt.*
Prompt Injection / Tool Misuse | Policy-based capability gating — an agent cannot invoke capabilities not declared in its policy | pol.*
Model Extraction / IP Theft | Least-privilege capability declarations; deny by default | cap.*, pol.*
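The context-poisoning mitigation checks arguments against the capability's declared parameter schema before dispatch, so injected instructions cannot widen a call beyond what was declared. The schema format and `validate` helper below are assumptions for illustration only.

```python
# Hypothetical per-capability parameter schemas.
SCHEMA = {"cap.ticket.close": {"ticket_id": int, "resolution": str}}

def validate(cap_id: str, args: dict) -> bool:
    """Reject any call whose capability or arguments fall outside the
    declared schema; unknown capabilities fail closed."""
    schema = SCHEMA.get(cap_id)
    if schema is None:
        return False  # undeclared capability
    if set(args) != set(schema):
        return False  # extra or missing fields rejected
    return all(isinstance(args[k], t) for k, t in schema.items())
```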
IMDA February 2026 — Developed with 70+ global organizations

Singapore Model AI Governance Framework for Agentic AI

The first governance framework designed specifically for agentic AI — agents making autonomous decisions. Developed by Singapore's IMDA in collaboration with OpenAI, Google, Microsoft, Anthropic, and 70+ other organizations. Emerging regional standard for Asia-Pacific.

imda.gov.sg ↗
MGF Principle | WCP Mechanism | Namespace
Human oversight of autonomous decisions | STEWARD_HOLD outcome routes high-risk decisions to human review | Router decisions
Explainability of agent actions | Evidence receipts with full decision provenance | evt.*
Accountability and attribution | Worker identity tied to every dispatched action | wrk.*
Scoped authorization | Capability declarations enforce least privilege per agentic task | cap.*, pol.*
NCSC / CISA Joint guidance — 21 countries

UK NCSC + CISA — Guidelines for Secure AI System Development

The first international joint government guidance on AI security, endorsed by 21 countries including the US, UK, Australia, Canada, and Germany. CISA's 2025 guidance on AI in Operational Technology expands coverage to critical infrastructure. Both documents map directly to WCP's four-phase lifecycle.

ncsc.gov.uk ↗
Guidance Phase | WCP Mechanism | Namespace
Secure Design — capability-based authorization | Least-privilege capability declarations per worker | cap.*
Secure Development — versioned governance | Versioned policy profiles with schema validation | pol.*
Secure Deployment — worker identity | Registered, authenticated worker identity lifecycle | wrk.*
Secure Operation — audit and incident response | Tamper-evident event stream; evidence receipts for incident reconstruction | evt.*
AICPA SOC 2 Type II — Enterprise audit standard

SOC 2 Type II — Trust Services Criteria

Required by enterprise customers for B2B SaaS and cloud services. AI systems face elevated SOC 2 scrutiny: 95%+ traceability for model outputs, immutable logging, and monthly audit cycles are increasingly expected. WCP's event stream satisfies these requirements architecturally.

aicpa.org ↗
SOC 2 Criteria | WCP Mechanism | Namespace
CC6.1 — Logical and physical access controls | Worker identity + capability-based access control | wrk.*, cap.*
CC7.2 — System monitoring | Tamper-evident event stream covering all agent decisions | evt.*
Processing integrity — completeness and accuracy | Evidence receipts tie every output to the authorizing policy and worker | evt.*
Availability — fail-safe behavior | Fail-closed router: deny by default if policy unavailable | Router default
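The processing-integrity row hinges on evidence receipts that an auditor can independently recompute. The sketch below assumes receipt field names and a SHA-256 content hash; the actual evt.* receipt format is defined by the WCP spec.

```python
import hashlib

def evidence_receipt(worker_id: str, cap_id: str,
                     policy_version: str, output: str) -> dict:
    """Tie an output to the worker, capability, and policy that authorized
    it, plus a content hash an auditor can recompute independently."""
    return {
        "worker": worker_id,
        "capability": cap_id,
        "policy": policy_version,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
```

Because the hash is recomputable from the output alone, a SOC 2 auditor can sample outputs monthly and confirm each one traces to an authorizing policy without trusting the system under audit.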
EU Law Active — AI enforcement expanding 2025+

GDPR — Articles 13, 15, and 22 (Automated Decision-Making)

GDPR's automated decision-making provisions (Art. 22) require safeguards and meaningful information about decisions made without meaningful human involvement — exactly the scenario WCP governs. WCP's audit trail provides the evidence needed to satisfy data subject rights requests and DPIA obligations.

gdpr.eu ↗
GDPR Requirement | WCP Mechanism | Namespace
Art. 22 — Safeguards for automated decision-making | Evidence receipts provide full decision provenance: worker, capability, policy, outcome | evt.*
Art. 15 — Right of access | Immutable audit log enables reconstruction of any agent decision for any time range; also supports Art. 35 DPIAs | evt.*
Art. 5 — Lawfulness and purpose limitation | Capability declarations scope agents to declared purposes; out-of-scope requests are denied | cap.*, pol.*
GDPR enforcement increasingly intersects with the EU AI Act. Organizations subject to both can use WCP's evt.* namespace to satisfy audit trail requirements under both regimes simultaneously.
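Answering an access request then reduces to replaying the evt.* log for a time window. The query below is a sketch assuming an illustrative event shape with ISO-8601 timestamps (which compare correctly as plain strings) and a worker field; it is not a WCP API.

```python
def events_in_range(events, start_ts, end_ts, worker=None):
    """Replay the immutable event log for a time window, optionally
    filtered to one worker, e.g. for an Art. 15 access request."""
    return [
        e for e in events
        if start_ts <= e["ts"] <= end_ts
        and (worker is None or e["worker"] == worker)
    ]
```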

Forward to your team

Your organization is deploying AI agents. Here's the open governance protocol that maps to the standards your compliance, security, and legal teams already know — NIST, EU AI Act, OWASP, ISO 42001, MITRE ATLAS, SOC 2, GDPR, and more.

More alignments coming

Additional mappings in progress: HIPAA, PCI DSS, FedRAMP, IEEE 7000 series, OECD AI Principles, and financial services AI regulations. Contributions welcome.

If you've mapped WCP to a standard not listed here, open a pull request or issue.

Open a GitHub Issue →