Mainstream software is engineered for stable conditions. When those conditions disappear, the design fails — often at the worst possible moment.
Most software is built under a hidden assumption: that conditions are predictable. The developer has a reliable laptop, a fast connection, a charged battery, and cognitive surplus. Those conditions are encoded into every architectural decision, whether the developer intends it or not.
Protective Computing calls this the Stability Assumption. It is not a bug. It is the default design mode of the industry. And it fails structurally when users do not share those conditions.
Cloud-first and API-first architectures treat the network as a guaranteed substrate. Authentication calls a remote server. Data is never written locally. Sync is a background process, not a core design concern.
When the network disappears — due to geography, censorship, infrastructure failure, or carrier throttling — these systems go dark. Not partially. Completely. The user is locked out of their own data, in some cases permanently, because the decryption key lives on a server they can no longer reach.
The fix: Local Authority — design for offline operation first. Network sync is an optimization, not a requirement.
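The Local Authority stance can be sketched as a store whose writes and reads never touch the network, with sync reduced to draining a queue of pending operations when connectivity happens to exist. This is an illustrative sketch, not a reference implementation; the class and method names are invented here.

```python
import queue

class LocalFirstStore:
    """Local-first storage sketch: every write lands on the device first;
    network sync is a deferred optimization, never a requirement."""

    def __init__(self):
        self._data = {}                # local source of truth
        self._pending = queue.Queue()  # ops waiting for connectivity

    def put(self, key, value):
        self._data[key] = value        # succeeds with no network at all
        self._pending.put(("put", key, value))

    def get(self, key):
        return self._data[key]         # reads never touch the network

    def sync(self, send):
        """Drain pending ops through `send`; a failure keeps them queued."""
        while not self._pending.empty():
            op = self._pending.get()
            try:
                send(op)
            except OSError:
                self._pending.put(op)  # retry on a later sync pass
                break
```

The key property: a sync failure is invisible to the user. Data remains readable and writable; only the background queue grows.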
UX research is typically conducted with engaged, rested participants who have agreed to be studied. Real users in crisis are cognitively depleted. They are in pain, under threat, sleep-deprived, or managing multiple simultaneous emergencies.
Software that requires users to navigate complex flows, parse legal-language consent dialogs, or remember account credentials under these conditions will fail them. The failure looks like "user error." It is system error.
The fix: Essential Utility — identify the critical path, strip everything that competes with it, and minimize cognitive load on the path that matters most.
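One way to make Essential Utility concrete is to declare, for each step in a flow, whether it sits on the critical path, and drop everything that does not. The flow definition below is hypothetical; the step names are illustrative only.

```python
# Hypothetical flow definition: each step declares whether it is
# on the critical path. Non-essential steps are stripped, not shown.
FLOW = [
    ("enter_location", True),    # needed to dispatch help
    ("rate_our_app", False),     # competes with the critical path
    ("confirm_submit", True),
    ("share_on_social", False),
]

def critical_path(flow):
    """Keep only the steps the user must complete."""
    return [name for name, essential in flow if essential]

print(critical_path(FLOW))  # ['enter_location', 'confirm_submit']
```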
Many systems treat deletion, submission, and sharing as terminal operations with no recovery path. This is a speed optimization for the average case. It is a catastrophe for the degraded case.
A user fleeing a violent situation who accidentally submits a form with their location, or permanently deletes the only copy of critical evidence, cannot recover from that loss. The system's "simple" design has introduced irreversible harm.
The fix: Reversibility — undo is not a luxury feature. It is a safety requirement.
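A common way to implement this is the soft-delete (tombstone) pattern: "delete" moves data to recoverable trash, and the only truly destructive step is an explicit, delayed purge. A minimal sketch, with invented names:

```python
import time

class ReversibleStore:
    """Soft-delete sketch: 'delete' writes a tombstone instead of
    erasing, so the operation can be undone until explicitly purged."""

    def __init__(self):
        self._items = {}
        self._trash = {}   # key -> (value, deleted_at)

    def put(self, key, value):
        self._items[key] = value

    def delete(self, key):
        # recoverable, not terminal
        self._trash[key] = (self._items.pop(key), time.time())

    def undo_delete(self, key):
        value, _ = self._trash.pop(key)
        self._items[key] = value

    def purge(self, older_than_seconds):
        """The only destructive step: explicit, and delayed by a window."""
        cutoff = time.time() - older_than_seconds
        self._trash = {k: v for k, v in self._trash.items()
                       if v[1] > cutoff}
```

The retention window for `purge` is a policy decision; the structural point is that no single user action is irreversible.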
Analytics-driven product culture incentivizes collecting everything. Full behavioral telemetry, location history, device fingerprints, and social graphs are all "free signal." This data is also a liability that compounds with every user added to the system.
For vulnerable users, the liability is not abstract. A health system that logs symptom searches, a messaging app that stores social graph metadata, or a housing app that retains eviction inquiry history can expose users to discrimination, prosecution, or physical harm.
The fix: Exposure Minimization — collect only what is operationally necessary. Treat data as a liability, not an asset.
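Operationally, this means minimizing at collection time with an explicit allowlist: fields not on the list never reach storage, so they can never be leaked, subpoenaed, or breached. The field names below are illustrative, not from any real schema.

```python
# Sketch of collection-time minimization: an explicit allowlist of
# operationally necessary fields; everything else is dropped before
# the record ever touches storage.
NECESSARY_FIELDS = {"case_id", "appointment_time"}

def minimize(record):
    """Keep only the fields the service needs to operate."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

raw = {
    "case_id": "A-113",
    "appointment_time": "2024-05-01T09:00",
    "device_fingerprint": "...",   # liability, not signal
    "location_history": ["..."],   # liability, not signal
}
stored = minimize(raw)
```

Inverting the default matters: an allowlist fails safe (new fields are dropped until justified), while a blocklist fails open.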
Most systems model security threats as external attackers trying to breach perimeter defenses. They do not model the user as a potential target of institutional coercion, domestic surveillance, or forced device disclosure.
A system that cannot be quickly wiped, that requires a cloud account to delete data, or that logs user actions to a server the user does not control, provides no protection when the threat is a coercive actor with physical access to the device — or the ability to subpoena your servers.
The fix: Coercion Resistance — design for the user's ability to protect themselves under duress, not just for your system's ability to resist external attack.
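Structurally, coercion resistance often means: ciphertext on the device, key in local control, and a wipe that is a single local action requiring no account, no network, and no cloud round trip. The sketch below shows the shape only; the XOR step is a stand-in and is NOT secure — a real build would use a vetted authenticated cipher.

```python
import os

class WipeableVault:
    """Coercion-resistance sketch: data is stored encrypted, the key
    lives only locally, and panic_wipe() destroys the key with no
    server involvement. (Encryption here is a toy stand-in.)"""

    def __init__(self):
        self._key = os.urandom(32)
        self._blobs = {}

    def _xor(self, data: bytes) -> bytes:
        # Stand-in for real encryption: repeating-key XOR, NOT secure.
        if self._key is None:
            raise PermissionError("vault wiped")
        return bytes(b ^ self._key[i % len(self._key)]
                     for i, b in enumerate(data))

    def store(self, name, plaintext: bytes):
        self._blobs[name] = self._xor(plaintext)

    def read(self, name) -> bytes:
        return self._xor(self._blobs[name])

    def panic_wipe(self):
        """One local action; needs no account, no network, no cloud."""
        self._key = None
```

Because only the key is destroyed, the wipe is instant regardless of data volume — a property that matters when the threat has physical access.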
Performance budgets, bandwidth assumptions, and minimum device specifications are typically set to match mid-tier devices in wealthy markets. Low-end Android handsets, feature phones, shared tablets, or devices already constrained by dozens of background processes are never in the test suite.
The result: software that is unusable for users who most need it. Loading spinners on 2G. Out-of-memory crashes on 512 MB devices. Critical flows that time out before completing because the server assumes fast responses.
The fix: Degraded-Infrastructure Functionality — test on constrained hardware and degraded connections as a first-class requirement, not a post-launch concern.
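One concrete pattern for surviving slow links is retry-within-a-deadline: instead of assuming one fast round trip, keep retrying a flaky transport until an overall time budget is exhausted. A minimal sketch, with an invented transport standing in for a degraded network:

```python
import time

def fetch_with_budget(transport, deadline_s):
    """Retry a flaky transport until the overall time budget runs out,
    rather than assuming a single fast round trip."""
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        try:
            return transport()
        except TimeoutError:
            continue   # degraded link: retry within the same budget
    raise TimeoutError("budget exhausted")

# Degraded-network stand-in: fails twice before succeeding.
attempts = {"n": 0}
def flaky_2g_link():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError
    return "payload"

print(fetch_with_budget(flaky_2g_link, deadline_s=1.0))  # payload
```

The same injection technique — a transport wrapper that adds latency or failures — can be turned into a CI fixture so degraded conditions are tested on every commit, not discovered in the field.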
All six failures above share a root: the design process treats instability as an edge case to be handled, not a condition to be designed for. Testing is done under favorable conditions. QA is done by well-resourced testers. Incident response plans assume the team can respond quickly.
This is the Stability Assumption. It is not a mistake any individual developer makes. It is baked into the industry's tooling, hiring, testing culture, and success metrics.
The fix: Protective Computing — a discipline that inverts the assumption and requires systems to prove they work under instability, not just under favorable conditions.
Instability is not a single condition. It is a cluster of constraints that arrive together and compound each other: an unreliable network, a cognitively depleted user, constrained hardware, data that can be used against its owner, and adversaries with physical access to the device.
The users who face this cluster are not edge cases. They are chronically ill people managing health data on old phones. Refugees with intermittent connectivity. Domestic abuse survivors trying to document evidence without being monitored. Journalists protecting sources. Anyone whose operating environment falls below the assumptions baked into the software they depend on.
Protective Computing does not ask developers to solve poverty or geopolitics. It asks for a specific design stance: assume instability, design for degradation, verify under adversarial conditions.