A systems-engineering discipline focused on software designed for conditions of
human vulnerability—crisis, displacement, illness, coercion, and institutional instability.
It rejects the Stability Assumption and prioritizes containment, reversibility, and essential utility under degradation.
Mainstream software is usually built on a Stability Assumption: persistent connectivity, predictable infrastructure,
cognitive surplus, environmental safety, and institutional trust. These assumptions fail under crisis, illness,
displacement, coercion, and systemic instability.
When stability assumptions fail, engagement-first systems produce lockout, unintended disclosure, and irreversible harm.
These are structural failures, not edge cases, in high-risk operating environments.
Protective Computing replaces this model with enforceable constraints designed for degraded and adversarial conditions, formalizing a missing systems layer for environments where stability cannot be assumed.
Enforcement Pipeline
Stage 1: Metadata / Sitemap / Robots
Stage 2: Completeness (no semantic blanks)
Stage 3: Verification hardness (no weak methods)
Each stage is executed in CI on push and pull request.
Stage 3 fails if WEAK_VERIFICATION_COUNT > 0.
Current baseline: WEAK_VERIFICATION_COUNT=0.
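The Stage 3 gate can be sketched as a minimal CI check. This is an illustrative sketch only: the ledger structure, the `method` field, and the weak-method names below are assumptions, not the project's actual schema.

```python
import sys

# Hypothetical set of verification methods considered "weak";
# the names are illustrative, not the project's actual taxonomy.
WEAK_METHODS = {"manual-review", "self-attestation"}

def weak_verification_count(ledger):
    """Count ledger claims backed by a weak verification method."""
    return sum(1 for claim in ledger if claim["method"] in WEAK_METHODS)

def stage3_gate(ledger):
    """Return a CI exit code: 0 if the gate passes, 1 on any weak method."""
    count = weak_verification_count(ledger)
    print(f"WEAK_VERIFICATION_COUNT={count}")
    return 0 if count == 0 else 1

if __name__ == "__main__":
    # Example ledger in which every claim has an executable check.
    ledger = [
        {"claim": "offline-first storage", "method": "executable-test"},
        {"claim": "reversible writes", "method": "executable-test"},
    ]
    sys.exit(stage3_gate(ledger))  # exits 0: gate passed
```

Returning a nonzero exit code is what lets the CI runner fail the pipeline on push or pull request whenever the count regresses above zero.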
Normative claims are tied to executable verification procedures and audited artifacts.
Proof of Enforcement (snapshot): WEAK_VERIFICATION_COUNT=0 · Stage 3 gate passed
Verification status is continuously recomputed from the published ledger and fails CI on regression.
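A minimal sketch of that recomputation, assuming the published ledger is a JSON array with a per-entry `weak` boolean (an illustrative field name, not the actual ledger format):

```python
import json

def recompute_status(ledger_json, baseline=0):
    """Recompute WEAK_VERIFICATION_COUNT from a published ledger snapshot.

    CI should fail whenever the recomputed count exceeds the recorded
    baseline; the "weak" flag per entry is an assumed field name.
    """
    ledger = json.loads(ledger_json)
    count = sum(1 for entry in ledger if entry.get("weak", False))
    return {"count": count, "regressed": count > baseline}

# Example snapshot with no weak verification methods.
snapshot = '[{"claim": "containment", "weak": false}]'
print(recompute_status(snapshot))  # → {'count': 0, 'regressed': False}
```

Recomputing from the ledger, rather than trusting a stored badge, keeps the snapshot honest: the published status can never drift from the auditable artifacts behind it.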