These crosswalks position Protective Computing alongside recognized frameworks without overstating equivalence. They are translation aids for architects, reviewers, and buyers, not certifications and not legal conclusions.
Claim boundary: Protective Computing does not replace NIST, ISO, SOC 2, or OWASP. It provides a human-vulnerability design layer that turns coercion, degradation, local authority, and bounded disclosure into auditable engineering requirements that those frameworks can consume.
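To make "auditable engineering requirements" concrete, here is a minimal sketch of how retention-by-default and bounded disclosure could be expressed as checkable code rather than policy prose. The `RetentionPolicy` type and `audit` function are hypothetical illustrations, not part of the repository:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class RetentionPolicy:
    # Retention-by-default: nothing persists unless explicitly enabled.
    persist_history: bool = False
    # Bounded disclosure: exports are limited to an explicit field allowlist.
    export_fields: tuple[str, ...] = field(default_factory=tuple)

def audit(policy: RetentionPolicy) -> list[str]:
    """Return reviewer-facing findings; an empty list means the defaults hold."""
    findings = []
    if policy.persist_history:
        findings.append("history persistence enabled: requires a documented opt-in")
    if len(policy.export_fields) > 10:
        findings.append("export allowlist unusually broad: check against the field ledger")
    return findings
```

A reviewer can then assert `audit(RetentionPolicy()) == []` in CI, which is the sense in which a framework such as ISO/IEC 27001 can "consume" the requirement as evidence.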
| Framework | What it already covers well | Protective Computing delta | Current repository anchors | Still needed |
|---|---|---|---|---|
| NIST Privacy Framework | Privacy risk identification, governance, data processing functions, and privacy-by-design language. | Sharpens failure analysis around coercion, offline survivability, retention-by-default, and operator non-possession of plaintext. | Compliance matrix, field ledger, retention policy table | Function-by-function mapping to Identify, Govern, Control, Communicate, and Protect. |
| NIST AI RMF | Govern, Map, Measure, and Manage structure for risk governance and operational monitoring. | Adds a concrete model for how AI-assisted or sensitive systems should behave under human vulnerability, institutional pressure, and degraded infrastructure. | Specification, threat models, audit checklist | AI-specific examples showing how Protective Computing constraints alter system prompts, retention, and operator review paths. |
| ISO/IEC 27001 | Information security management systems, control governance, asset risk treatment, and availability/security discipline. | Defines user-protective outcomes that a security program can target: no master keys, reversible actions, local authority, bounded export, and deterministic degradation. | Audit evidence index, reversibility boundary table, local authority profile | Annex A-style control mapping for each Protective principle. |
| ISO/IEC 42001 | AI management system governance, accountability, oversight, and lifecycle management. | Supplies the missing human-instability lens for deployment contexts where AI systems influence care, records, triage, or user-facing decision support. | Specification, boundary page, independent review | Lifecycle examples for AI-assisted systems with coercion and degraded-mode requirements. |
| SOC 2 Trust Services Criteria | Security, availability, processing integrity, confidentiality, and privacy assurance categories buyers already recognize. | Translates those broad categories into product-level failure tests for essential-path continuity, retention minimization, and disclosure boundaries under stress. | Compliance matrix, audit path, PainTracker packet | A buyer-facing assurance narrative that maps evidence packets to the Trust Services Criteria. |
| OWASP ASVS | Application security verification requirements for authentication, access control, cryptography, and data protection. | Extends secure-by-design posture into human-risk behavior: deniability gaps, coercion boundaries, telemetry minimization, and degraded-path usability. | PainTracker packet, coercion boundary matrix, metadata retention policy | A requirement-by-requirement partial mapping for the reference implementation. |
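Several rows above invoke "deterministic degradation" and "essential-path continuity." A minimal sketch of what that could mean in code: feature availability is a pure function of operating mode, so degraded behavior is predictable and testable. The feature names and modes are illustrative assumptions, not drawn from the reference implementation:

```python
from enum import Enum

class Mode(Enum):
    FULL = "full"          # normal connectivity
    DEGRADED = "degraded"  # offline or constrained infrastructure

# Hypothetical feature table: essential-path features stay on in degraded mode;
# non-essential features are deferred rather than blocking the essential path.
FEATURES: dict[str, set[Mode]] = {
    "record_entry": {Mode.FULL, Mode.DEGRADED},  # essential path
    "local_search": {Mode.FULL, Mode.DEGRADED},  # essential path
    "cloud_sync":   {Mode.FULL},                 # deferred until connectivity returns
    "analytics":    {Mode.FULL},                 # dropped entirely when degraded
}

def available(feature: str, mode: Mode) -> bool:
    """Deterministic: availability depends only on (feature, mode), never on retries or timing."""
    return mode in FEATURES[feature]
```

Because the table is static, an auditor can enumerate every `(feature, mode)` pair and verify essential-path continuity directly, which is the kind of product-level failure test the SOC 2 and ISO rows describe.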
The high-level matrix is now backed by framework-specific annexes with explicit control translations and evidence hooks.