Standards Alignment
WCP is designed to satisfy the agent governance requirements of major AI security frameworks, regulations, and standards bodies. This page documents how WCP's namespaces map to each.
NIST AI Agent Standards Initiative (CAISI)
Launched February 17, 2026. NIST explicitly identified three requirements for trustworthy AI agents: authenticated identity, scoped permissions, and logged activity. WCP was designed to address all three.
nist.gov/caisi ↗

| NIST Requirement | WCP Mechanism | Namespace |
|---|---|---|
| Agent authentication | Registered, authenticated worker identity | wrk.* |
| Permission scoping | Machine-readable capability declarations | cap.* |
| Permission scoping | Versioned operator-defined authorization policies | pol.* |
| Activity logging and auditing | Tamper-evident event stream of all agent actions | evt.* |
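The three-part mapping above can be made concrete with a minimal sketch. All class and field names here are illustrative assumptions, not part of the WCP specification:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Worker:
    """wrk.*: a registered, authenticated worker identity."""
    worker_id: str
    key_fingerprint: str  # proof the identity was authenticated at registration

@dataclass(frozen=True)
class Capability:
    """cap.*: a machine-readable permission declared up front."""
    name: str   # e.g. "cap.files.read" (invented capability name)
    scope: str  # e.g. a path glob the permission is limited to

@dataclass
class Event:
    """evt.*: one logged action tying a worker and capability to an outcome."""
    worker_id: str
    capability: str
    outcome: str  # "DISPATCH" | "DENY" | "STEWARD_HOLD"
    at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
```

Identity, permission, and log record are three separate objects on purpose: each maps to one NIST requirement and one WCP namespace.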
EU AI Act — Articles 9 and 12
The EU AI Act mandates risk management systems (Art. 9) and automatic logging of events for high-risk AI systems (Art. 12). WCP's event system is a direct technical implementation of Art. 12's requirements.
artificialintelligenceact.eu ↗

| EU AI Act Requirement | WCP Mechanism | Namespace |
|---|---|---|
| Art. 9 — Risk management system | Policy profiles defining authorized capabilities and deny rules | pol.*, ctrl.* |
| Art. 12 — Automatic logging of events | Tamper-evident event stream with dispatch, deny, and steward events | evt.* |
| Art. 13 — Transparency and information provision | Capability declarations expose agent permissions in machine-readable form | cap.* |
WCP's evt.* event stream satisfies Art. 12's audit trail requirement out of the box.
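Article 12's requirement that logging be automatic can be sketched as a recorder that every routing decision must pass through, emitting exactly one record per decision. The event type names below are invented for illustration and are not WCP-normative:

```python
EVENT_TYPES = {"evt.dispatch", "evt.deny", "evt.steward"}  # illustrative names

def log_decision(stream: list, worker_id: str, capability: str, event_type: str) -> dict:
    """Append one record per routing decision.

    Logging is automatic, not opt-in: there is no code path that dispatches
    without recording, which is the substance of Art. 12.
    """
    if event_type not in EVENT_TYPES:
        raise ValueError(f"unknown event type: {event_type}")
    record = {"type": event_type, "worker": worker_id, "capability": capability}
    stream.append(record)
    return record
```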
OWASP Top 10 for LLM Applications
The OWASP LLM Top 10 is the de facto security baseline for LLM applications. WCP directly mitigates the risks most relevant to agentic systems, led by excessive agency (LLM06).
owasp.org ↗

| OWASP Risk | WCP Mitigation | Namespace |
|---|---|---|
| LLM06 — Excessive Agency | Least-privilege capability declarations. Agents declare exactly what they need; router denies everything else by default. | cap.*, pol.* |
| Overreliance (LLM09 in the 2023 edition) | Mandatory blast radius scoring before dispatch; STEWARD_HOLD outcome routes high-risk decisions to human review. | Router decisions |
| LLM09 — Misinformation | Evidence receipts and audit trail tie every output to the agent, capability, and policy that authorized it. | evt.* |
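The least-privilege, deny-by-default gate in the LLM06 row can be sketched in a few lines. Worker IDs and capability names are invented for illustration:

```python
# Capabilities each worker declared at registration (illustrative data).
DECLARED = {
    "wrk-042": {"cap.files.read", "cap.mail.draft"},
}

def route(worker_id: str, requested_capability: str) -> str:
    """Deny-by-default routing: anything not explicitly declared is refused."""
    declared = DECLARED.get(worker_id, set())  # unknown worker -> empty set -> deny
    return "DISPATCH" if requested_capability in declared else "DENY"

route("wrk-042", "cap.files.read")  # -> "DISPATCH"
route("wrk-042", "cap.shell.exec")  # -> "DENY"
route("wrk-999", "cap.files.read")  # -> "DENY" (unregistered worker, fail closed)
```

Note that the unregistered worker falls through to an empty capability set rather than an error: the router fails closed instead of failing open.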
NIST SP 800-53 — Security and Privacy Controls
The US federal security and privacy controls catalog, mandatory for federal agencies and widely adopted by large enterprises. WCP's ctrl.* namespace maps directly to SP 800-53 control identifiers, letting compliance teams adopt WCP without translation work.
| SP 800-53 Control | WCP Mapping | WCP Control ID |
|---|---|---|
| AC-2 — Account Management | Worker registration and identity lifecycle | ctrl.identity.secrets_deny_default |
| AC-6 — Least Privilege | Capability declarations enforce minimum necessary permissions | cap.* namespace |
| AU-2 — Event Logging | Mandatory telemetry: dispatch, deny, steward events | evt.* namespace |
| AU-9 — Protection of Audit Information | Tamper-evident event stream; evidence receipts | evt.* namespace |
| CM-7 — Least Functionality | Fail-closed router: deny by default unless capability explicitly declared | Router default |
ISO/IEC 42001 — AI Management Systems
The first certifiable international standard for AI management systems, with 38 distinct controls covering AI governance, risk management, data governance, and the system lifecycle. Increasingly required in enterprise procurement in regulated industries; 76% of organizations plan to pursue ISO 42001 certification (CSA 2025).
iso.org/standard/42001 ↗

| ISO 42001 Requirement | WCP Mechanism | Namespace |
|---|---|---|
| AI risk management controls | Policy profiles defining authorized capabilities and deny rules | pol.*, ctrl.* |
| Continuous monitoring and improvement | Tamper-evident event stream with full decision audit trail | evt.* |
| AI system lifecycle management | Versioned policy profiles and capability declarations | pol.*, cap.* |
| Third-party and supplier oversight | Worker identity and capability auditing per worker | wrk.*, cap.* |
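The lifecycle-management row above rests on policy profiles being versioned. One way to sketch that is plain records with a monotonic version field; the identifiers and the supersession rule here are assumptions, not WCP-normative:

```python
# An illustrative versioned policy profile (pol.* namespace).
POLICY_V2 = {
    "id": "pol.finance.default",
    "version": 2,
    "allow": ["cap.ledger.read"],
    "deny": ["cap.ledger.write"],
}

def supersedes(new: dict, old: dict) -> bool:
    """A profile replaces its predecessor only if it shares the same id
    and carries a strictly higher version, so rollbacks and replays of
    stale policies are detectable."""
    return new["id"] == old["id"] and new["version"] > old["version"]
```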
NIST AI Risk Management Framework (AI RMF 1.0)
The US government's voluntary AI risk framework, referenced by federal regulators including the CFPB, FDA, SEC, FTC, and EEOC. WCP maps to all four RMF functions: Govern, Map, Measure, and Manage.
nist.gov/itl/ai-risk-management-framework ↗

| RMF Function | WCP Mechanism | Namespace |
|---|---|---|
| GOVERN — Policies and accountability structures | Versioned operator-defined policy profiles | pol.* |
| MAP — Capability catalogs and threat identification | Machine-readable capability declarations per worker | cap.* |
| MEASURE — Performance and bias assessment | Blast radius scoring, dry-run mode, capability envelope testing | Router decisions |
| MANAGE — Incident response and continuous monitoring | Tamper-evident event stream; STEWARD_HOLD for human escalation | evt.* |
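The MEASURE and MANAGE rows can be illustrated together: a risk score gates each dispatch, and scores above a threshold route to a human steward. The 0-to-1 score and the specific thresholds are assumptions for illustration; WCP does not fix them:

```python
def decide(blast_radius: float,
           hold_threshold: float = 0.7,
           deny_threshold: float = 0.9) -> str:
    """Route by blast radius score (0.0 = harmless, 1.0 = catastrophic).

    Low risk dispatches automatically, high risk escalates to a human
    steward (MANAGE), and extreme risk is denied outright.
    """
    if blast_radius >= deny_threshold:
        return "DENY"
    if blast_radius >= hold_threshold:
        return "STEWARD_HOLD"  # human review before anything executes
    return "DISPATCH"
```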
MITRE ATLAS — Adversarial Threat Landscape for AI Systems
The most widely adopted threat matrix designed specifically for AI/ML systems. The October 2025 update added 14 new techniques for agentic AI. WCP's capability gating and audit trail directly counter the most exploited ATLAS techniques.
atlas.mitre.org ↗

| ATLAS Threat | WCP Mitigation | Namespace |
|---|---|---|
| AI Agent Context Poisoning | Input validation enforced at capability boundary before dispatch | cap.* |
| Memory Manipulation | Worker identity integrity checks; tamper-evident event log detects post-hoc modification | wrk.*, evt.* |
| Prompt Injection / Tool Misuse | Policy-based capability gating — agent cannot invoke capabilities not declared in policy | pol.* |
| Model Extraction / IP Theft | Least-privilege capability declarations; deny by default | cap.*, pol.* |
Singapore Model AI Governance Framework for Agentic AI
The first governance framework designed specifically for agentic AI — agents making autonomous decisions. Developed by Singapore's IMDA in collaboration with OpenAI, Google, Microsoft, Anthropic, and 70+ other organizations. Emerging regional standard for Asia-Pacific.
imda.gov.sg ↗

| MGF Principle | WCP Mechanism | Namespace |
|---|---|---|
| Human oversight of autonomous decisions | STEWARD_HOLD outcome routes high-risk decisions to human review | Router decisions |
| Explainability of agent actions | Evidence receipts with full decision provenance | evt.* |
| Accountability and attribution | Worker identity tied to every dispatched action | wrk.* |
| Scoped authorization | Capability declarations enforce least-privilege per agentic task | cap.*, pol.* |
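An evidence receipt of the kind the explainability and accountability rows describe binds an outcome to its full provenance. The field names and digest scheme below are illustrative assumptions:

```python
import hashlib
import json

def evidence_receipt(worker_id: str, capability: str,
                     policy_version: str, outcome: str) -> dict:
    """Bind an outcome to the worker, capability, and policy that authorized it.

    The receipt id is a digest over the canonicalized provenance fields, so
    the same decision always yields the same receipt and any mismatch
    between receipt and record is detectable.
    """
    body = {
        "worker": worker_id,
        "capability": capability,
        "policy": policy_version,
        "outcome": outcome,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "receipt_id": digest[:16]}
```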
UK NCSC + CISA — Guidelines for Secure AI System Development
The first international joint government guidance on AI security, endorsed by 21 countries including the US, UK, Australia, Canada, and Germany. CISA's 2025 guidance on AI in Operational Technology expands coverage to critical infrastructure. Both documents map directly to WCP's four-phase lifecycle.
ncsc.gov.uk ↗

| Guidance Phase | WCP Mechanism | Namespace |
|---|---|---|
| Secure Design — capability-based authorization | Least-privilege capability declarations per worker | cap.* |
| Secure Development — versioned governance | Versioned policy profiles with schema validation | pol.* |
| Secure Deployment — worker identity | Registered, authenticated worker identity lifecycle | wrk.* |
| Secure Operation — audit and incident response | Tamper-evident event stream; evidence receipts for incident reconstruction | evt.* |
SOC 2 Type II — Trust Services Criteria
Required by enterprise customers for B2B SaaS and cloud services. AI systems face elevated SOC 2 scrutiny: 95%+ traceability for model outputs, immutable logging, and monthly audit cycles are increasingly expected. WCP's event stream satisfies these requirements architecturally.
aicpa.org ↗

| SOC 2 Criteria | WCP Mechanism | Namespace |
|---|---|---|
| CC6.1 — Logical and physical access controls | Worker identity + capability-based access control | wrk.*, cap.* |
| CC7.2 — System monitoring | Tamper-evident event stream covering all agent decisions | evt.* |
| Processing integrity — completeness and accuracy | Evidence receipts tie every output to the authorizing policy and worker | evt.* |
| Availability — fail-safe behavior | Fail-closed router: deny by default if policy unavailable | Router default |
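One common way to make an event stream tamper-evident, as the CC7.2 row requires, is a hash chain in which each entry commits to the digest of the previous one. This is an illustrative construction; WCP does not mandate this particular scheme:

```python
import hashlib
import json

class TamperEvidentLog:
    """Hash-chained event log: editing any past entry breaks verification."""

    def __init__(self):
        self.entries = []        # list of (payload, chained digest)
        self._head = "0" * 64    # genesis value for the chain

    def append(self, event: dict) -> str:
        payload = json.dumps(event, sort_keys=True)
        self._head = hashlib.sha256((self._head + payload).encode()).hexdigest()
        self.entries.append((payload, self._head))
        return self._head

    def verify(self) -> bool:
        """Recompute the chain from genesis; any mismatch means tampering."""
        head = "0" * 64
        for payload, digest in self.entries:
            head = hashlib.sha256((head + payload).encode()).hexdigest()
            if head != digest:
                return False
        return True
```

Because every digest depends on all prior entries, an auditor only needs the final head value to commit to the entire history.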
GDPR — Articles 13, 15, and 22 (Automated Decision-Making)
GDPR's automated decision-making provisions (Art. 22) require explainability for decisions made without human involvement — exactly the scenario WCP governs. WCP's audit trail provides the evidence trail needed to satisfy data subject rights requests and DPIA obligations.
gdpr.eu ↗

| GDPR Requirement | WCP Mechanism | Namespace |
|---|---|---|
| Art. 22 — Right to explanation for automated decisions | Evidence receipts provide full decision provenance: worker, capability, policy, outcome | evt.* |
| Art. 15 — Right of access (DPIA support) | Immutable audit log enables reconstruction of any agent decision for any time range | evt.* |
| Art. 5 — Lawfulness and purpose limitation | Capability declarations scope agents to declared purposes; out-of-scope requests denied | cap.*, pol.* |
Organizations subject to both the EU AI Act and GDPR can rely on the same evt.* namespace to satisfy audit trail requirements under both regimes simultaneously.
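Reconstructing agent decisions for an Art. 15 access request reduces to filtering the immutable log by time range. The event shape and ISO 8601 string timestamps below are assumptions for illustration:

```python
# Illustrative slice of an evt.* stream (timestamps as ISO 8601 strings,
# which compare correctly as plain strings when the UTC offset is uniform).
events = [
    {"at": "2026-03-01T10:00:00+00:00", "worker": "wrk-1", "outcome": "DISPATCH"},
    {"at": "2026-03-01T11:00:00+00:00", "worker": "wrk-1", "outcome": "DENY"},
    {"at": "2026-03-02T09:00:00+00:00", "worker": "wrk-1", "outcome": "DISPATCH"},
]

def decisions_in_range(stream: list, start: str, end: str) -> list:
    """Return every logged decision in [start, end], inclusive, so a data
    subject request can be answered from the log alone."""
    return [e for e in stream if start <= e["at"] <= end]
```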
More alignments coming
Additional mappings in progress: HIPAA, PCI DSS, FedRAMP, IEEE 7000 series, OECD AI Principles, and financial services AI regulations. Contributions welcome.
If you've mapped WCP to a standard not listed here, open a pull request or issue.
Open a GitHub Issue →