Stability Assumption: Why “Normal Operating Conditions” Fail
Most software is quietly built on a Stability Assumption: that the world is predictable enough that “normal operating conditions” hold most of the time. Protective Computing treats that as a design error.
That assumption has concrete costs. When the network drops, when attention is scarce, or when safety disappears, the product stops being helpful and starts producing harm: lockout, forced disclosure, and irreversible mistakes. Protective Computing is the discipline of designing for those moments on purpose.
Definition
The Stability Assumption is the belief (usually implicit) that users have reliable connectivity, safe environments, uninterrupted attention, stable institutions, and time to recover from mistakes. A related failure mode is stability bias: teams treat instability as an edge case and postpone protective behavior to “later” — until the product becomes too coupled to reverse.
Why It Fails
Under crisis, coercion, illness, displacement, or institutional instability, software that optimizes for engagement and growth can produce irreversible harm: lockout, forced disclosure, account seizure, and data loss. These aren’t rare corner cases for many populations — they’re predictable operating conditions.
- Connectivity collapses: outages, throttling, air-gapped environments, roaming costs.
- Attention collapses: cognitive fatigue, panic, sleep deprivation, medical stress.
- Safety collapses: coercion, surveillance, device seizure, forced unlock.
- Institutional trust collapses: arbitrary moderation, hostile HR/legal, shifting policy regimes.
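One way to keep these collapses from being treated as implicit edge cases is to model them as explicit, combinable operating conditions the product is tested against. This is a minimal sketch; the type names and the behavior strings are illustrative assumptions, not part of any spec:

```python
from enum import Flag, auto

class Collapse(Flag):
    """Hypothetical modeling of the failure modes above as explicit,
    combinable operating conditions rather than implicit edge cases."""
    NONE = 0
    CONNECTIVITY = auto()   # outages, throttling, air gaps
    ATTENTION = auto()      # fatigue, panic, medical stress
    SAFETY = auto()         # coercion, surveillance, device seizure
    INSTITUTIONAL = auto()  # arbitrary moderation, shifting policy

def required_behaviors(conditions: Collapse) -> set[str]:
    """Map active collapses to protective behaviors the product must keep."""
    behaviors = set()
    if Collapse.CONNECTIVITY in conditions:
        behaviors.add("local-first persistence")
    if Collapse.ATTENTION in conditions:
        behaviors.add("undoable destructive actions")
    if Collapse.SAFETY in conditions:
        behaviors.add("minimal on-device disclosure")
    if Collapse.INSTITUTIONAL in conditions:
        behaviors.add("exportable user data")
    return behaviors
```

Because the conditions are a `Flag`, they compose: a user under surveillance with no connectivity is `Collapse.CONNECTIVITY | Collapse.SAFETY`, and the product's required behavior is the union of both sets.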
Protective Replacement: Enforceable Constraints
Protective Computing replaces “assume stability” with explicit constraints you can test and audit:
- Reversibility: destructive actions remain undoable; failure doesn’t become permanent.
- Exposure Minimization: collect less, retain less, reveal less by default.
- Local Authority: preserve user control even offline and under delay.
- Degraded Functionality: define acceptable behavior under scarcity.
- Coercion Resistance: design for hostile contexts and forced disclosure.
- Essential Utility: optimize for survival tasks and critical paths, not retention.
Verification Mindset
Treat stability as a hypothesis, not a fact. Write down what “degradation” means for your product (battery low, no network, hostile device access) and attach verification procedures that would fail if the system regresses.
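A verification procedure of that kind can be sketched as an executable test: simulate the degraded condition and assert the protective behavior. The names below (`save_draft`, `sync_to_server`, the statuses) are illustrative assumptions for a local-first save path, not an API from the spec:

```python
import json
import os
import tempfile

class NetworkUnavailable(Exception):
    pass

def sync_to_server(draft: dict) -> None:
    # Stand-in for a real client; here the outage is always simulated.
    raise NetworkUnavailable("simulated outage")

def save_draft(draft: dict, directory: str) -> str:
    """Local-first save: persist to disk first, then attempt best-effort sync."""
    path = os.path.join(directory, f"{draft['id']}.json")
    with open(path, "w") as f:
        json.dump(draft, f)
    try:
        sync_to_server(draft)
        return "synced"
    except NetworkUnavailable:
        return "saved-locally"  # degraded but acceptable behavior

def verify_offline_save() -> None:
    """Fails if a regression makes saving depend on the network."""
    with tempfile.TemporaryDirectory() as d:
        status = save_draft({"id": "n1", "text": "hello"}, d)
        assert status == "saved-locally"
        with open(os.path.join(d, "n1.json")) as f:
            assert json.load(f)["text"] == "hello"

verify_offline_save()
```

If a later change routes the save through the server before writing locally, `verify_offline_save` fails, which is exactly the point: the stability hypothesis is checked by a procedure, not by a comment.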
The Protective Computing spec is written to support this: requirements are normative and paired with evidence expectations. Start here: Specification v1.0.