Protective Computing

A systems-engineering discipline for software built under conditions of human vulnerability—crisis, illness, coercion, displacement, and institutional instability. It rejects the Stability Assumption and replaces it with enforceable constraints: reversibility, exposure minimization, local authority, degraded functionality, coercion resistance, and essential utility.

License: CC-BY 4.0 (Canon)
Scope: safety-critical HCI, privacy engineering, resilience
Anchor: 10.5281/zenodo.18887610
Theory / Overton Framework: 10.5281/zenodo.18688516

Start Here

1. Adopt the mindset. Get the practical framing, vocabulary, and where Protective Computing fits in your engineering process.
2. Use the constraints. Read the normative spec (MUST / SHOULD / MUST NOT) and apply it in design reviews and audits.
3. Strengthen the canon. Contribute independent review with concrete verification procedures and evidence expectations.

Audit and Verification

1. Audit this site. Single-page reviewer entry point with the repo URL, exact audit commands, evidence locations, and expected pass signals.
2. Read the canonical spec. Normative MUST / SHOULD / MUST NOT requirements with concrete verification procedures and audit anchors.
3. Use the review checklist. Independent review path for external critique, reproducible testing, and evidence expectations.

Entry Pages

1. Stability Assumption (Stability Bias). What stability bias is, why it fails under crisis, and the protective constraints that replace it.
2. Offline-first health architecture. A minimal, protective architecture for sensitive health apps: local authority, exposure minimization, and safe sync.
3. Trauma-informed software patterns. Practical patterns for safety under stress: reversible actions, reduced disclosure, and deliberate degradation.
4. Protective Computing Audit Checklist. A structured checklist for auditing software against all six Protective Computing principles, from coercion risk to degraded-mode utility.
5. What software gets wrong about instability. The seven structural failures of software under degraded conditions, from connectivity assumptions to coercion blindness.
6. Designing for coercion resistance. How to audit software for coercion risk and design systems that protect users under institutional threat and forced disclosure.
7. Local authority vs cloud dependence. Why cloud-first architecture fails vulnerable users, and how local-authority design preserves agency under degraded or adversarial conditions.
8. Degraded-mode UX patterns. UX patterns and a practical checklist for software that must function under low battery, weak connectivity, and resource constraints.

Problem Statement

Mainstream software is usually built on a Stability Assumption: persistent connectivity, predictable infrastructure, cognitive surplus, environmental safety, and institutional trust. These assumptions fail under crisis, illness, displacement, coercion, and systemic instability.

  • When stability assumptions fail, engagement-first systems produce lockout, disclosure, and irreversible harm.
  • These are structural failures, not edge cases, in high-risk operating environments.
  • Protective Computing replaces this model with enforceable constraints designed for degraded and adversarial conditions.

Protective Computing formalizes a missing systems layer for environments where stability cannot be assumed.
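Of the constraints named above, reversibility is the most directly codable. A minimal sketch (all names hypothetical, not the canon's reference implementation): destructive actions are journaled rather than applied irreversibly, so any action can be reverted.

```python
# Hypothetical sketch of the reversibility constraint: every mutation records
# the previous value in a journal, so destructive actions stay revocable.
import time

class ReversibleStore:
    def __init__(self):
        self._data = {}
        self._journal = []  # (timestamp, key, previous_value)

    def set(self, key, value):
        self._journal.append((time.time(), key, self._data.get(key)))
        self._data[key] = value

    def delete(self, key):
        # Soft delete: the old value survives in the journal for restoration.
        self._journal.append((time.time(), key, self._data.pop(key, None)))

    def undo(self):
        """Revert the most recent action; return False if nothing to revert."""
        if not self._journal:
            return False
        _, key, previous = self._journal.pop()
        if previous is None:
            self._data.pop(key, None)
        else:
            self._data[key] = previous
        return True

store = ReversibleStore()
store.set("note", "draft")
store.delete("note")
store.undo()  # the delete is reversed; "note" is back to "draft"
```

Under stress, a user who deletes the wrong record gets the record back instead of an irreversible loss; the journal, not the user's vigilance, carries the safety property.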

Enforcement Pipeline

Stage 1: Metadata / Sitemap / Robots
Stage 2: Completeness (no semantic blanks)
Stage 3: Verification hardness (no weak methods)

  • Each stage is executed in CI on push and pull request.
  • Stage 3 fails if WEAK_VERIFICATION_COUNT > 0.
  • Current baseline: WEAK_VERIFICATION_COUNT=0.
  • Normative claims are tied to executable verification procedures and audited artifacts.

Proof of Enforcement (snapshot): WEAK_VERIFICATION_COUNT=0 · Stage 3 gate passed. Verification status is continuously recomputed from the published ledger and fails CI on regression.

Evidence: audit this site, CI workflow, semantic gate script, audit artifacts.
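The Stage 3 gate described above can be sketched as follows. This is an illustrative stand-in, not the project's actual gate script; the marker strings and ledger format are assumptions.

```python
# Hypothetical sketch of the Stage 3 gate: count "weak" verification methods
# in a ledger and return a failing CI exit code when the count exceeds zero.
WEAK_MARKERS = ("manual inspection", "self-attested", "unverified")  # assumed

def weak_verification_count(ledger_lines):
    """Count ledger entries whose verification method matches a weak marker."""
    return sum(
        1 for line in ledger_lines
        if any(marker in line.lower() for marker in WEAK_MARKERS)
    )

def stage3_gate(ledger_lines):
    """Return a CI exit code: 0 when the gate passes, 1 on any weak method."""
    count = weak_verification_count(ledger_lines)
    print(f"WEAK_VERIFICATION_COUNT={count}")
    return 1 if count > 0 else 0

ledger = [
    "claim-001 | verified-by: executable test suite",
    "claim-002 | verified-by: self-attested statement",
]
exit_code = stage3_gate(ledger)  # prints WEAK_VERIFICATION_COUNT=1; exit_code is 1
```

Because the gate's output is a single machine-readable counter, "no weak methods" becomes a binary CI condition rather than a reviewer's judgment call.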

What Makes It Different

Conventional Model → Protective Model

  • Optimize retention and engagement → Optimize essential task completion under stress
  • Assume stability and persistent connectivity → Assume degradation and adversarial conditions
  • Centralize control and data authority → Preserve local authority and offline agency
  • Feature growth as default success signal → Essential utility with reversibility constraints
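The local-authority contrast above can be sketched as a local-first write path (all names hypothetical): writes commit to local storage first, and sync to a remote is a deferred, best-effort step, so the device stays authoritative when connectivity is degraded or absent.

```python
# Hypothetical sketch of local authority: the local store is the source of
# truth; remote sync is deferred and failure-tolerant, never a precondition.
class LocalFirstStore:
    def __init__(self, remote=None):
        self._local = {}       # authoritative local state
        self._outbox = []      # changes awaiting sync
        self._remote = remote  # optional; the app works without it

    def write(self, key, value):
        # Local commit succeeds regardless of connectivity.
        self._local[key] = value
        self._outbox.append((key, value))

    def read(self, key):
        return self._local.get(key)

    def sync(self):
        """Best-effort sync; failure never blocks local reads or writes."""
        if self._remote is None:
            return 0
        sent = 0
        while self._outbox:
            key, value = self._outbox[0]
            try:
                self._remote.push(key, value)
            except OSError:
                break  # stay usable offline; retry on the next sync attempt
            self._outbox.pop(0)
            sent += 1
        return sent

store = LocalFirstStore()            # no remote configured: fully offline
store.write("med_log", "08:00 dose taken")
store.read("med_log")                # local read works with no connectivity
```

The inversion is the point: in a cloud-first design, `write` fails when the server is unreachable; here only `sync` can fail, and its failure is invisible to the user's essential task.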

Published Artifacts

Canonical Paper

  • Protective Computing Canon v1.0
    The structural overview of the discipline defining the theory, engineering practices, evaluation framework, and reference implementation.
    DOI: 10.5281/zenodo.18887610

Artifact categories: Foundation, Practice, Specification, Tooling.

Full research corpus available on Zenodo: Protective Computing Community Archive

Core Principles