Protective Computing
Systems design under human vulnerability

A systems-engineering discipline focused on software designed for conditions of human vulnerability—crisis, displacement, illness, coercion, and institutional instability. It rejects the Stability Assumption and prioritizes containment, reversibility, and essential utility under degradation.

License: CC-BY 4.0 (Canon)
Scope: safety-critical HCI, privacy engineering, resilience
Anchor: 10.5281/zenodo.18688516

Problem Statement

Mainstream software is usually built on a Stability Assumption: persistent connectivity, predictable infrastructure, cognitive surplus, environmental safety, and institutional trust. These assumptions fail under crisis, illness, displacement, coercion, and systemic instability.

Protective Computing formalizes a missing systems layer for environments where stability cannot be assumed.

Enforcement Pipeline

Stage 1: Metadata / Sitemap / Robots
Stage 2: Completeness (no semantic blanks)
Stage 3: Verification hardness (no weak methods)
  • Each stage is executed in CI on push and pull request.
  • Stage 3 fails if WEAK_VERIFICATION_COUNT > 0.
  • Current baseline: WEAK_VERIFICATION_COUNT=0.
  • Normative claims are tied to executable verification procedures and audited artifacts.
Proof of Enforcement (snapshot)
WEAK_VERIFICATION_COUNT=0 · Stage 3 gate passed
Verification status is continuously recomputed from the published ledger; any regression fails CI.

Evidence: CI workflow, semantic gate script, audit artifacts.
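The Stage 3 gate described above can be sketched in a few lines. This is a minimal illustration, not the project's actual gate script: the ledger entry shape and the taxonomy of "weak" verification methods are assumptions made for the example.

```python
# Illustrative Stage 3 "verification hardness" gate. A ledger is assumed to
# be a list of entries, each tagged with the method used to verify its claim.
# The WEAK_METHODS taxonomy below is hypothetical.

WEAK_METHODS = {"manual-review", "self-attestation", "unverified"}

def count_weak(entries):
    """Count ledger entries whose verification method is considered weak."""
    return sum(1 for e in entries if e.get("verification") in WEAK_METHODS)

def stage3_gate(entries):
    """Return a CI exit code: 0 when WEAK_VERIFICATION_COUNT is 0, else 1."""
    weak = count_weak(entries)
    print(f"WEAK_VERIFICATION_COUNT={weak}")
    return 1 if weak > 0 else 0
```

In CI, a wrapper would load the published ledger and call `sys.exit(stage3_gate(entries))`, so any regression from the zero-weak baseline fails the push or pull-request check.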

What Makes It Different

Conventional Model → Protective Model

  • Optimize retention and engagement → Optimize essential task completion under stress
  • Assume stability and persistent connectivity → Assume degradation and adversarial conditions
  • Centralize control and data authority → Preserve local authority and offline agency
  • Feature growth as default success signal → Essential utility with reversibility constraints

Published Artifacts

Foundation

Practice

Tooling (Forthcoming)

  • Protective Legitimacy Score — Operational Rubric v1.0
  • Protective Design Patterns v0.1
  • Implementation Companion Guide
  • Threat Modeling Templates for Vulnerability Contexts

Core Principles

Reversibility
User actions and system changes remain undoable; failures don’t become permanent harm.
Exposure Minimization
Reduce data surface by default; collect only what is essential, locally, with intent.
Local Authority
The user retains control even offline; autonomy survives institutional delay.
Coercion Resistance
Design for hostile contexts: pressure, surveillance, forced disclosure, compromised devices.
Degraded Functionality
Operate under battery scarcity, network collapse, cognitive fatigue, and time pressure.
Essential Utility
Optimize for survival tasks, not engagement metrics; prioritize the critical path.
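The Reversibility and Local Authority principles can be made concrete with a small sketch: a local-first store in which every mutation records its own inverse, so changes remain undoable offline and a mistake never becomes permanent. All names here are hypothetical illustrations, not part of any published Protective Computing artifact.

```python
# Sketch of a reversible, local-first key-value store: each change is
# journaled with the value it replaced, so it can be undone without any
# network or institutional dependency.

class ReversibleStore:
    def __init__(self):
        self._data = {}
        self._journal = []  # stack of (key, previous_value) inverses

    def set(self, key, value):
        """Apply a change and record how to reverse it."""
        self._journal.append((key, self._data.get(key)))
        self._data[key] = value

    def undo(self):
        """Reverse the most recent change; returns False when nothing is left."""
        if not self._journal:
            return False
        key, previous = self._journal.pop()
        if previous is None:
            self._data.pop(key, None)
        else:
            self._data[key] = previous
        return True

    def get(self, key):
        return self._data.get(key)
```

Because the journal is held locally, undo works under network collapse, and because inverses are recorded at write time rather than reconstructed later, reversal stays cheap even under cognitive fatigue or time pressure.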