Designing for Coercion Resistance
How to audit software for coercion risk and design systems that protect users under institutional threat, duress, and forced disclosure.
Coercion resistance is not a niche security concern. It is a structural requirement for any software used by people in high-risk environments: domestic abuse survivors, journalists, refugees, whistleblowers, patients with sensitive diagnoses, or anyone whose safety depends on controlling access to their own data.
Most security engineering addresses external attackers. Coercion resistance addresses a different threat model: what happens when the attacker has physical access to the user or the device, or the legal authority to compel disclosure?
The Coercion Threat Model
Coercion takes several forms. Each requires a different design response:
Threat Type 1: Physical Device Access
An abuser, authority, or adversary has physical access to the user's device — either with or without the user's cooperation. They can see the screen, read files, inspect installed apps, and access anything that does not require additional authentication.
Threat Type 2: Forced Authentication
The user is compelled to unlock the device or provide their password under duress. This may be a legal order (border crossing, law enforcement), a coercive relationship (abusive partner, employer), or direct physical threat.
Threat Type 3: Server-Side Subpoena or Data Request
A government, law enforcement agency, or legal adversary compels your infrastructure to disclose user data. You do not need to be malicious for this to harm your users — you only need to hold data they did not want disclosed.
Threat Type 4: Surveillance by Proximity
Someone with access to the same network, the same device, or the same household can observe the user's behavior — not necessarily the content, but the metadata. Who they contacted, when they opened an app, what they searched for.
How to Audit for Coercion Risk
For each threat type above, ask these questions about your system:
Physical Device Access Audit
- What is visible without unlocking the device? (Notification previews, app names, lock-screen widgets?)
- What is accessible without authentication after the device is already unlocked?
- Can the user quickly hide, disguise, or wipe sensitive content from an unlocked device?
- Does the app appear in the recent-apps list with a screenshot of the last screen visible?
- Are there any locally stored files, photos, or exports that persist after sessions close?
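The last question in this checklist can be partially automated. The sketch below is one way to scan an app's local data directory for files that survive after a session closes; the function name and directory layout are assumptions for illustration, not part of any particular platform API.

```python
import time
from pathlib import Path

def find_persistent_artifacts(app_data_dir: str, min_age_seconds: int = 0) -> list[Path]:
    """List files under the app's data directory that persist after sessions close.

    Anything this returns is readable by an attacker with an unlocked device;
    the audit goal is for the list to be empty, or for every entry to be
    encrypted at rest.
    """
    now = time.time()
    leftovers = []
    for path in Path(app_data_dir).rglob("*"):
        if path.is_file() and now - path.stat().st_mtime >= min_age_seconds:
            leftovers.append(path)
    return leftovers
```

Running this against the app's export and cache directories at the end of each audit pass gives a concrete list of artifacts to eliminate or encrypt.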
Forced Authentication Audit
- Does the app support a duress PIN or password that reveals only a decoy or empty state?
- Can users quickly delete all local data from a pre-authenticated screen?
- Is biometric authentication the only option, or are passphrases also supported?
- Does the system allow remote wipe of local data if the device is not in the user's control?
- Are there any recovery codes or backup credentials stored somewhere the attacker could find?
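A duress PIN, as in the first question above, can be sketched as two hashed credentials checked in constant time: the real PIN opens the real data, the duress PIN opens a decoy. The class name and return values here are illustrative assumptions, and a production app would use a slower, memory-hard KDF rather than the lowered PBKDF2 iteration count used to keep this sketch fast.

```python
import hashlib
import hmac
import os

def _hash_pin(pin: str, salt: bytes) -> bytes:
    # Iteration count lowered for the sketch; tune upward in production.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

class DuressVault:
    """Sketch of a duress-PIN check: the real PIN opens real data,
    the duress PIN opens a decoy state and may trigger a silent wipe."""

    def __init__(self, real_pin: str, duress_pin: str):
        self._salt = os.urandom(16)
        self._real = _hash_pin(real_pin, self._salt)
        self._duress = _hash_pin(duress_pin, self._salt)

    def unlock(self, entered: str) -> str:
        h = _hash_pin(entered, self._salt)
        if hmac.compare_digest(h, self._real):
            return "real"    # open the real vault
        if hmac.compare_digest(h, self._duress):
            # A real implementation could silently wipe the real vault here.
            return "decoy"   # open the benign decoy vault
        return "denied"
```

Note that the decoy path must be indistinguishable from a normal unlock to an observer watching the screen.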
Server-Side Data Risk Audit
- What data would you be compelled to hand over if served a legal request?
- Is any sensitive data encrypted with user-held keys (zero-knowledge), so you cannot decrypt it even if compelled?
- Do your logs contain identifying information about user behavior?
- Are there retention schedules that limit how long sensitive data is held?
- Is your infrastructure jurisdiction clear to users? Do they know which legal system governs their data?
Proximity Surveillance Audit
- Does the app leak metadata over the network that would reveal it is being used?
- Are push notifications encrypted end-to-end, or could a network observer read them?
- Does the app transmit data in the background, creating observable traffic patterns?
- Could someone on the same Wi-Fi network determine what the user is doing based on traffic analysis?
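One common mitigation for the traffic-analysis question above is padding messages to a small set of fixed sizes, so an on-path observer sees only bucket lengths rather than true content sizes. The bucket sizes and framing below are illustrative assumptions, not a wire-format specification.

```python
def pad_to_bucket(payload: bytes, buckets=(256, 1024, 4096)) -> bytes:
    """Pad a message up to the next fixed-size bucket so a network observer
    learns only which bucket was sent, not the true payload length.
    A 2-byte length prefix lets the receiver strip the padding."""
    framed = len(payload).to_bytes(2, "big") + payload
    for size in buckets:
        if len(framed) <= size:
            return framed + b"\x00" * (size - len(framed))
    raise ValueError("payload exceeds largest bucket")

def unpad(padded: bytes) -> bytes:
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]
```

Padding does not hide *that* the app is communicating; hiding timing and frequency patterns requires additional cover traffic.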
Coercion Resistance Design Patterns
Pattern 1: Rapid Wipe
Provide a single-action mechanism for the user to wipe or hide sensitive local data quickly. This should work without going through account settings, without network access, and without a confirmation dialog that takes more than one tap.
Implementation notes
- Use a shake gesture, long-press, or hidden trigger, documented so the user can configure it before they need it
- Wipe should remove local data, cached content, and session tokens
- Consider showing a benign "app not installed" or blank screen after wipe, not an error
- Do not require a network connection to perform local wipe
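The notes above can be sketched as a single local function. The directory and file names are hypothetical placeholders; the point is that the wipe touches only local storage, needs no network, and leaves a blank state rather than an error.

```python
import shutil
from pathlib import Path

def rapid_wipe(data_dir: str, cache_dir: str, token_file: str) -> None:
    """Single-action local wipe: no network, no multi-step confirmation.

    Removes local data, cached content, and the session token, then
    recreates an empty data directory so the app shows a blank state
    instead of crashing or displaying an error."""
    for directory in (data_dir, cache_dir):
        shutil.rmtree(directory, ignore_errors=True)
    Path(token_file).unlink(missing_ok=True)
    Path(data_dir).mkdir(parents=True, exist_ok=True)
```

On real devices, plain file deletion may leave recoverable traces on flash storage; encrypting local data and destroying the key is the more robust form of this pattern.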
Pattern 2: Minimal Server Footprint
Store the minimum on your servers. What you do not hold, you cannot be compelled to disclose. For sensitive apps, consider a zero-knowledge architecture where the server holds only encrypted blobs and the user holds the decryption key.
Implementation notes
- Map all data stored server-side; question each field's necessity
- Encrypt sensitive content client-side before upload, using keys derived from user credentials
- Log only what is operationally required; rotate and purge logs on a defined schedule
- Publish a transparency report or data inventory so users know what you hold
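The client-side encryption note above hinges on where the key lives. A minimal sketch of the key-derivation half, assuming PBKDF2 over a user passphrase: the server stores only the salt and the resulting ciphertext, never the passphrase or the derived key, so it has nothing useful to hand over under compulsion.

```python
import hashlib
import os

def derive_content_key(passphrase: str, salt: bytes) -> bytes:
    """Runs on the client only. The server stores the salt alongside the
    encrypted blob, but never sees the passphrase or this derived key,
    so it cannot decrypt user content even if legally compelled."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)

# Usage sketch: derive a key, then feed it to a vetted AEAD
# (e.g. AES-GCM) to encrypt content before upload.
salt = os.urandom(16)
key = derive_content_key("correct horse battery staple", salt)
```

The encryption step itself should use an established AEAD implementation, not hand-rolled crypto; the design property being illustrated is only that key derivation never leaves the client.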
Pattern 3: Honest Security Claims
The most common coercion resistance failure is not a design failure — it is a documentation failure. Apps claim protection they do not provide. Users trust those claims and are harmed when they fail under pressure.
Implementation notes
- Do not claim "end-to-end encrypted" unless the server genuinely cannot decrypt user content
- Document exactly what is and is not protected under each threat type
- Be explicit about jurisdiction and what legal requests you would comply with
- Update security documentation when your architecture changes
Pattern 4: Exposure Minimization by Default
The less data you collect, the less damage coercion can do. Default settings should minimize data collection. Users who want to share more can opt in.
Implementation notes
- Analytics, telemetry, and behavioral tracking are off by default
- Contact lists, location, and social graph access require explicit, per-use permission
- Data minimization applies to metadata as well as content (timestamps, read receipts, typing indicators)
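These defaults can be made explicit in code. The settings structure below is a hypothetical sketch: every optional collection channel is off until the user opts in, so a fresh install exposes nothing.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacySettings:
    """Exposure-minimizing defaults: every optional data channel is off
    until the user explicitly opts in. Covers metadata as well as content."""
    analytics_enabled: bool = False
    telemetry_enabled: bool = False
    read_receipts: bool = False
    typing_indicators: bool = False
    contact_upload: bool = False
    location_access: bool = False
```

Making the structure frozen means opting in produces a new settings object, which is easy to log locally for the user's own review.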
Anti-Patterns to Remove
- Mandatory cloud backup: If sensitive data is automatically synced to your servers without user consent, you hold it and can be compelled to disclose it
- Persistent session tokens: Long-lived tokens mean a device that is accessed later still grants full access
- Account recovery via SMS: SMS is trivially intercepted in many threat models; offer passphrase-based alternatives
- Screenshot-capturing recent-apps views: The OS thumbnail of the last screen is visible without unlocking — sensitive screens should be masked
- Notification content previews: Content in notifications is visible without unlocking; default to hiding content
Relationship to Other Principles
Coercion resistance works in conjunction with:
- Exposure Minimization: The less data held, the less that coercion can extract
- Local Authority: Data the user controls locally is harder to compel from a third party
- Reversibility: The ability to undo actions includes the ability to undo disclosures where technically possible