Protective Computing adds a human-instability layer to AI governance by requiring explicit negative claims (statements of what a system is not safe for), vulnerability-aware threat boundaries, and evidence bound to each release.
| Protective control | ISO 42001 concern | Translation | Evidence |
|---|---|---|---|
| Boundary clarity | AI system scope and intended use | Requires explicit publication of what the system is not safe for. | boundary page, reference packet |
| Human vulnerability controls | Lifecycle risk treatment | Adds coercion, degraded infrastructure, and institutional pressure as first-class deployment risks. | threat models |
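The mapping above can be made machine-checkable by encoding each row as a record and verifying that every protective control carries at least one evidence artifact. This is a minimal sketch, not a published schema; the class and function names (`ProtectiveControl`, `missing_evidence`) are hypothetical and only the field values come from the table.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtectiveControl:
    """One row of the Protective Computing / ISO 42001 mapping (hypothetical schema)."""
    name: str
    iso_42001_concern: str
    translation: str
    evidence: tuple[str, ...]

CONTROLS = (
    ProtectiveControl(
        name="Boundary clarity",
        iso_42001_concern="AI system scope and intended use",
        translation="Explicit publication of what the system is not safe for.",
        evidence=("boundary page", "reference packet"),
    ),
    ProtectiveControl(
        name="Human vulnerability controls",
        iso_42001_concern="Lifecycle risk treatment",
        translation=("Coercion, degraded infrastructure, and institutional "
                     "pressure treated as first-class deployment risks."),
        evidence=("threat models",),
    ),
)

def missing_evidence(controls):
    """Return the names of controls that lack any attached evidence artifact."""
    return [c.name for c in controls if not c.evidence]

print(missing_evidence(CONTROLS))  # → [] (both rows carry evidence)
```

A release gate could run `missing_evidence` in CI and block deployment whenever a control row has an empty evidence list.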