Protective Computing
A systems-engineering discipline for software built under conditions of human vulnerability—crisis, illness, coercion, displacement, and institutional instability. It rejects the Stability Assumption and replaces it with enforceable constraints: reversibility, exposure minimization, local authority, degraded functionality, coercion resistance, and essential utility.
Start Here
Adopt the mindset
Get the practical framing, vocabulary, and where Protective Computing fits in your engineering process.
Use the constraints
Read the normative spec (MUST / SHOULD / MUST NOT) and apply it in design reviews and audits.
Strengthen the canon
Contribute independent review with concrete verification procedures and evidence expectations.
Audit and Verification
Audit this site
Single-page reviewer entry point with the repo URL, exact audit commands, evidence locations, and expected pass signals.
Read the canonical spec
Normative MUST / SHOULD / MUST NOT requirements with concrete verification procedures and audit anchors.
Use the review checklist
Independent review path for external critique, reproducible testing, and evidence expectations.
Entry Pages
Stability Assumption (Stability Bias)
What stability bias is, why it fails under crisis, and the protective constraints that replace it.
Offline-first health architecture
A minimal, protective architecture for sensitive health apps: local authority, exposure minimization, and safe sync.
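The local-authority and safe-sync ideas above can be sketched in a few lines. This is an illustrative sketch only, not the reference architecture: the class name, queue shape, and sync policy are assumptions introduced here. The local store is authoritative and always writable; synchronization is a deferrable queue that only flushes when conditions are safe.

```python
# Sketch of local-authority, offline-first storage: the local store is the
# source of truth; sync is an optional, deferrable outbox. All names and
# policies here are illustrative assumptions, not a published design.
class LocalFirstStore:
    def __init__(self):
        self.data = {}        # authoritative local state
        self.outbox = []      # operations pending sync

    def write(self, key, value):
        # A local write always succeeds, regardless of connectivity.
        self.data[key] = value
        self.outbox.append({"op": "put", "key": key, "value": value})

    def sync(self, online: bool) -> int:
        # Exposure minimization: nothing leaves the device unless we are
        # online; returns the number of operations flushed.
        if not online:
            return 0
        sent = len(self.outbox)
        self.outbox.clear()   # stand-in for a real network transfer
        return sent

store = LocalFirstStore()
store.write("med_log", {"dose": "10mg"})
print(store.sync(online=False))  # 0: nothing sent while offline
print(store.sync(online=True))   # 1: queued op flushed when safe
```

The key property is that loss of connectivity never blocks the essential task (the write); it only defers disclosure.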
Trauma-informed software patterns
Practical patterns for safety under stress: reversible actions, reduced disclosure, and deliberate degradation.
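The reversible-actions pattern above can be illustrated with a delayed-commit delete. This is a minimal sketch under assumed names and an assumed undo-window policy; it is not the pattern catalog's implementation. Destructive operations are staged, and the user can restore the data within a grace window before the delete becomes final.

```python
# Minimal sketch of a reversible-action pattern: destructive operations
# are staged with an undo window before they are committed.
# Class name, API, and window length are illustrative assumptions.
import time

class ReversibleStore:
    def __init__(self, undo_window_s: float = 30.0):
        self.records = {}
        self.pending_deletes = {}   # id -> (record, deadline)
        self.undo_window_s = undo_window_s

    def put(self, rec_id, record):
        self.records[rec_id] = record

    def delete(self, rec_id):
        # Stage the delete instead of destroying data immediately.
        record = self.records.pop(rec_id)
        deadline = time.monotonic() + self.undo_window_s
        self.pending_deletes[rec_id] = (record, deadline)

    def undo_delete(self, rec_id) -> bool:
        record, deadline = self.pending_deletes.pop(rec_id)
        if time.monotonic() <= deadline:
            self.records[rec_id] = record
            return True
        return False  # window expired; the delete stands

store = ReversibleStore()
store.put("note1", "sensitive entry")
store.delete("note1")
assert "note1" not in store.records
store.undo_delete("note1")
print(store.records)  # note1 restored within the undo window
```

Under stress, users misfire; staging destructive actions converts an irreversible harm into a recoverable one.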
Protective Computing Audit Checklist
A structured checklist for auditing software against all six Protective Computing principles, from coercion risk to degraded-mode utility.
What software gets wrong about instability
The seven structural failures of software under degraded conditions, from connectivity assumptions to coercion blindness.
Designing for coercion resistance
How to audit software for coercion risk and design systems that protect users under institutional threat and forced disclosure.
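One coercion-resistance pattern mentioned in that material, protection under forced disclosure, can be sketched as a duress credential: a second valid code that opens a benign decoy view instead of the real data. Everything below (names, codes, the decoy policy) is an illustrative assumption; a real design requires threat modeling and independent review.

```python
# Sketch of a duress-credential pattern for coercion resistance.
# A coerced user can enter the duress code; to an observer the unlock
# looks normal, but only decoy data is shown. Illustrative only.
import hashlib
import hmac

def _digest(code: str) -> bytes:
    # In practice use a slow password hash (e.g. argon2), not bare SHA-256.
    return hashlib.sha256(code.encode()).digest()

REAL_HASH = _digest("correct-horse")
DURESS_HASH = _digest("battery-staple")

def unlock(code: str) -> str:
    d = _digest(code)
    if hmac.compare_digest(d, REAL_HASH):
        return "full_access"
    if hmac.compare_digest(d, DURESS_HASH):
        return "decoy_view"   # indistinguishable from a normal unlock
    return "denied"

print(unlock("battery-staple"))  # decoy_view
```

The constant-time comparison avoids leaking which branch matched; the harder design problem, making the decoy view plausible, is out of scope for a sketch.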
Local authority vs cloud dependence
Why cloud-first architecture fails vulnerable users, and how local-authority design preserves agency under degraded or adversarial conditions.
Degraded-mode UX patterns
UX patterns and a practical checklist for software that must function under low battery, weak connectivity, and resource constraints.
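The deliberate-degradation idea behind those patterns can be sketched as resource-gated feature tiers. The tiers, feature names, and thresholds below are assumptions made up for illustration, not values from the checklist: the point is that features are ranked by essentiality and shed in order as resources degrade.

```python
# Sketch of deliberate degradation: features ranked by essentiality and
# gated by available resources. Tiers and thresholds are illustrative.
def select_mode(battery_pct: int, bandwidth_kbps: int) -> list[str]:
    essential = ["record_entry", "view_local_data", "emergency_contact"]
    standard = essential + ["background_sync", "search"]
    full = standard + ["media_upload", "usage_analytics"]

    if battery_pct < 15 or bandwidth_kbps < 32:
        return essential   # keep only essential tasks alive
    if battery_pct < 50 or bandwidth_kbps < 256:
        return standard
    return full

print(select_mode(battery_pct=10, bandwidth_kbps=512))
```

Making the tiers explicit in code is what turns "the app slows down gracefully" from an aspiration into an auditable behavior.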
Problem Statement
Mainstream software is usually built on a Stability Assumption: persistent connectivity, predictable infrastructure, cognitive surplus, environmental safety, and institutional trust. These assumptions fail under crisis, illness, displacement, coercion, and systemic instability.
- When stability assumptions fail, engagement-first systems produce lockout, disclosure, and irreversible harm.
- These are structural failures, not edge cases, in high-risk operating environments.
- Protective Computing replaces this model with enforceable constraints designed for degraded and adversarial conditions.
Protective Computing formalizes a missing systems layer for environments where stability cannot be assumed.
Enforcement Pipeline
- Each stage is executed in CI on push and pull request.
- Stage 3 fails if WEAK_VERIFICATION_COUNT > 0.
- Current baseline: WEAK_VERIFICATION_COUNT=0.
- Normative claims are tied to executable verification procedures and audited artifacts.

Status: WEAK_VERIFICATION_COUNT=0 · Stage 3 gate passed. Verification status is continuously recomputed from the published ledger and fails CI on regression.
Evidence: audit this site, CI workflow, semantic gate script, audit artifacts.
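The Stage 3 gate described above can be sketched as a small check over the ledger. The ledger format, the "status: weak" marker, and the inline sample data are assumptions for illustration; the real gate lives in the repository's semantic gate script and CI workflow.

```python
# Hypothetical sketch of the Stage 3 semantic gate: count ledger entries
# with weak verification and fail on regression. The ledger format and
# "status: weak" marker are illustrative assumptions.
import sys

LEDGER = """\
- id: MUST-001
  status: verified
- id: MUST-002
  status: verified
"""

weak_verification_count = LEDGER.count("status: weak")
print(f"WEAK_VERIFICATION_COUNT={weak_verification_count}")
if weak_verification_count > 0:
    print("Stage 3 gate: FAIL", file=sys.stderr)
    sys.exit(1)
print("Stage 3 gate: PASS")
```

Run on every push and pull request, a check like this makes the WEAK_VERIFICATION_COUNT=0 baseline a hard constraint rather than a reported metric.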
What Makes It Different
| Conventional Model | Protective Model |
|---|---|
| Optimize retention and engagement | Optimize essential task completion under stress |
| Assume stability and persistent connectivity | Assume degradation and adversarial conditions |
| Centralize control and data authority | Preserve local authority and offline agency |
| Feature growth as default success signal | Essential utility with reversibility constraints |
Published Artifacts
Canonical Paper
- Protective Computing Canon v1.0
A structural overview of the discipline, defining its theory, engineering practices, evaluation framework, and reference implementation.
DOI: 10.5281/zenodo.18887610
Foundation
- The Overton Framework v1.3
Formal specification of Protective Computing principles
Practice
- Field Guide v0.1
Operational companion for systems engineers
Specification
- Protective Computing Specification v1.0
Normative requirements with verification procedures
- MUST Justifications Ledger
Rationale, threat tags, and status taxonomy for MUSTs
Tooling
- Protective Legitimacy Score — Operational Rubric v1.0 (PDF)
DOI: 10.5281/zenodo.18783432
- Protective Design Patterns v0.1 (draft)
- Threat Modeling Templates (draft)
Full research corpus available on Zenodo: Protective Computing Community Archive