Coercion Resistance
Users can maintain confidentiality and integrity under threat. Systems resist forced disclosure, extraction, and tampering.
What it is: Coercion Resistance means the system is designed so that neither users nor administrators can be forced (under threat, legal compulsion, or torture) to disclose, modify, or destroy user data.
Why it matters: Vulnerable populations face hostile actors: authoritarian states, abusive partners, criminals, institutional oppressors. A system that can be forced to surrender data is not safe for them. Conversely, if the system is architected so that surrender is impossible or provably unhelpful, coercion loses its power.
When to use: Any system serving users under threat must incorporate coercion resistance. This includes dissident communication, abuse-survivor support, refugee networks, and privacy-critical services in hostile contexts.
Why Coercion Resistance Matters
Adversaries use coercion when compromise fails. An attacker who cannot hack your system might instead:
- Threaten a user or administrator: "Give me the password or we harm you"
- Use legal compulsion: court orders, subpoenas, "national security" letters
- Physically torture for keys, passphrases, or biometric access
- Plant malware that triggers data destruction if not given credentials
- Demand that administrators modify the system or insert backdoors
Against these threats, traditional security is useless. You cannot encrypt away coercion. You must architect so that:
- Surrender buys nothing: Even if coerced parties comply, they cannot grant access to data they don't possess.
- Denial is provable: Victims can demonstrate they don't have what's demanded (no private keys, no administrator access).
- Destruction is automatic: If a threshold is crossed (failed login attempts, time without access), sensitive data self-destructs before coercers can exploit it.
- Plausible deniability: Even if data is compromised, its origin or authenticity is ambiguous.
Implementation Patterns
Zero-Knowledge Architecture
Design so the system (and administrators) never possess plaintext user data. Users hold encryption keys; the system stores only ciphertext.
- User encrypts data locally with their key before transmission
- System stores encrypted blob; cannot decrypt even with full database access
- User decrypts locally on their device
- Consequence: Coercers cannot extract data; system genuinely cannot provide it
Example: Signal stores messages encrypted until recipient downloads them. Server holds ciphertext only. If compromised, attackers get bytes they cannot use.
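The pattern above can be sketched end to end. This is a minimal Python illustration using only the standard library; the counter-mode keystream built from SHA-256 is a toy stand-in for a real AEAD cipher (use a vetted library such as libsodium in practice), and all names here are hypothetical:

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Slow KDF so offline guessing is expensive (iteration count is illustrative).
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy counter-mode keystream from SHA-256; same call encrypts and decrypts.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + nonce + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

# Client side: encrypt locally before anything leaves the device.
salt, nonce = os.urandom(16), os.urandom(16)
key = derive_key("six random diceware words here", salt)
ciphertext = keystream_xor(key, nonce, b"meeting at dawn")

# Server side: stores only (salt, nonce, ciphertext) -- it never sees the key,
# so even full database access (or a coerced administrator) yields unreadable bytes.
server_record = {"salt": salt, "nonce": nonce, "blob": ciphertext}

# Client side: decrypt locally after download.
plaintext = keystream_xor(key, server_record["nonce"], server_record["blob"])
assert plaintext == b"meeting at dawn"
```

The design point is where the key lives, not the cipher: because derivation and decryption both happen client-side, there is nothing on the server worth coercing.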
Passphrase-Based Encryption (Not Passwords)
Passwords are weak secrets: short, low-entropy, and trivially brute-forced once an attacker holds the ciphertext. Passphrases are stronger: long, high-entropy, and still memorable.
- Use high-entropy passphrases (six or more words chosen at random from a word list, in the spirit of "correct horse battery staple")
- Derive encryption keys from passphrases using slow KDFs (PBKDF2, Scrypt, Argon2)
- Never reveal the key to administrators or systems
- Consequence: An attacker who seizes the ciphertext but not the passphrase must brute-force it through the slow derivation, paying the full KDF cost per guess. (No KDF protects a passphrase the user has already been forced to reveal; pair this with deniability or destruction patterns.)
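A minimal sketch of slow key derivation using Python's standard library; the iteration count is illustrative (OWASP suggests on the order of 600,000 for PBKDF2-HMAC-SHA256, but tune it to your hardware budget):

```python
import hashlib
import os
import time

salt = os.urandom(16)
# Six diceware-style words; the words beyond xkcd's famous four add entropy.
passphrase = b"correct horse battery staple mango anvil"

start = time.perf_counter()
# PBKDF2-HMAC-SHA256 with a deliberately expensive iteration count.
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000)
elapsed = time.perf_counter() - start

# Every guess an offline attacker makes pays this full cost,
# versus nanoseconds per guess against a fast hash like SHA-1.
print(f"{len(key)}-byte key derived in {elapsed:.2f}s")
```

Argon2id is generally preferred where available (it is memory-hard); PBKDF2 is shown here only because it ships with the standard library.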
Plausible Deniability
Architecture where data's existence or authenticity is ambiguous:
- Decoy accounts: User maintains a secondary account with benign data. Under coercion, they reveal this account, and coercers cannot know if it's real or decoy.
- Hidden containers: Encrypted volumes appear to contain innocuous data; hidden volume contains sensitive data. Coercers cannot prove the hidden volume exists.
- Steganography: Sensitive data is hidden in innocuous files (images, audio) using imperceptible encoding. Coercers cannot detect it.
- Deniable messaging: Recipient can forge messages appearing to come from sender. No message is provably authentic, defeating forced testimony.
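The deniable-messaging idea can be shown with a symmetric MAC: because sender and recipient share the same key, either party could have produced any valid tag, so a tag authenticates the channel without proving authorship to a third party. A minimal Python sketch (key establishment, e.g. via Diffie-Hellman, is assumed to happen out of band):

```python
import hashlib
import hmac
import os

shared_key = os.urandom(32)   # assumed established between Alice and Bob out of band

def tag(message: bytes) -> bytes:
    # Symmetric MAC: a valid tag can be produced by *anyone* holding shared_key.
    return hmac.new(shared_key, message, hashlib.sha256).digest()

alice_msg = b"the files are in room 12"
alice_tag = tag(alice_msg)

# Bob verifies the message arrived intact over their authenticated channel:
assert hmac.compare_digest(alice_tag, tag(alice_msg))

# But Bob can forge a perfectly valid tag on any text he invents, since he
# holds the same key -- so a tag proves nothing to a coercer about who
# actually wrote a message. Alice can repudiate; forced testimony fails.
forged = tag(b"anything bob invents")
assert hmac.compare_digest(forged, tag(b"anything bob invents"))
```

This is the contrast with digital signatures: a signature is non-repudiable by design, which is exactly the wrong property for users under coercion.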
Dead Man's Switch
Automatic data destruction if preconditions fail:
- User must authenticate periodically (e.g., every 24 hours). If they don't, keys are destroyed.
- If user is kidnapped and cannot authenticate, encryption keys auto-destruct after timeout.
- Consequence: Coercers cannot extract keys because keys no longer exist
- Risk: User must remember to authenticate; network outages can trigger unwanted destruction
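A minimal sketch of the check-in logic, with an illustratively short timeout; a real deployment would persist state across restarts, zero memory more carefully, and run the destruction path in an independent watchdog process:

```python
import time
from typing import Optional

class DeadMansSwitch:
    """Destroys its key if the holder fails to check in within `timeout` seconds."""

    def __init__(self, key: bytes, timeout: float):
        self._key = bytearray(key)          # mutable so it can be zeroed in place
        self.timeout = timeout
        self.last_checkin = time.monotonic()

    def check_in(self) -> None:
        """Called by the user's periodic authentication (e.g., daily)."""
        self.last_checkin = time.monotonic()

    def get_key(self) -> Optional[bytes]:
        if time.monotonic() - self.last_checkin > self.timeout:
            for i in range(len(self._key)):  # deadline missed: zero the key
                self._key[i] = 0
            return None
        return bytes(self._key)

switch = DeadMansSwitch(b"\x01" * 32, timeout=0.05)   # 50 ms stands in for 24 h
assert switch.get_key() == b"\x01" * 32               # holder still checking in
time.sleep(0.1)
assert switch.get_key() is None                       # timeout passed: key gone
```

Note the risk from the list above is visible in the code: nothing distinguishes a kidnapped user from one behind a network outage, so timeouts must be chosen with the false-trigger cost in mind.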
Decentralized Authority
Require K-of-N threshold key reconstruction: no single person or entity can grant access.
- Split encryption key into N shares; need K shares to reconstruct (Shamir's Secret Sharing)
- Distribute shares among trusted parties (distinct locations, jurisdictions)
- No single coercion target can grant access; an attacker must coerce K share-holders simultaneously, across locations and jurisdictions
- Consequence: Significantly raises cost and coordination burden for attackers
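Shamir's scheme is short enough to sketch directly. A toy Python implementation over a prime field (parameters chosen for illustration; production systems should use an audited library):

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough to hold a ~128-bit secret

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k reconstruct it (k-of-n threshold)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)
shares = split(key, n=5, k=3)            # 5 custodians, any 3 suffice
assert reconstruct(shares[:3]) == key    # 3 shares: key recovered
assert reconstruct(shares[1:4]) == key   # any 3 work
assert reconstruct(shares[:2]) != key    # 2 shares reveal nothing usable
```

The last line is the coercion-resistance property: fewer than k shares are information-theoretically useless, so coercing one or two custodians buys the attacker nothing.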
Anti-Patterns (What to Avoid)
Hiding a vulnerability rather than eliminating it: the system appears secure but surrenders instantly under coercion.
Consequence: False sense of safety. Real victims are harmed when system fails.
System designed so administrators can decrypt any user data. Under coercion, administrators become single points of failure.
Consequence: Coercers target administrators instead; identical breach surface.
Passphrases run through a fast hash (MD5, SHA-1) instead of a slow KDF. Coercers who seize the ciphertext brute-force passphrases trivially.
Consequence: Passphrase protection is illusory.
Fingerprint, face, or iris unlock. Under coercion, the biometric can be used against the user: a finger or face can simply be forced onto the sensor.
Consequence: User cannot deny access or plausibly claim they don't know the credential.
Data persists indefinitely. If system is compromised or coercer gains access, all history is available.
Consequence: Zero mechanism to protect past communications.
Real-World Examples
Signal — End-to-End Encryption with Forward Secrecy
What it is: Messaging system where messages are encrypted end-to-end; neither Signal servers nor any third party can read them.
How it works:
- Each message is encrypted under an ephemeral key that changes with every message
- Keys are deleted after use (forward secrecy)
- Even if server is compromised, attackers cannot decrypt past messages
- Server stores encrypted messages temporarily; users fetch and delete
- Signal cannot be forced to decrypt: it has no keys
Coercion resistance: Authorities can subpoena or pressure Signal, but they cannot compel decryption of past messages: the server never held the keys, and the ephemeral keys no longer exist.
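Forward secrecy can be illustrated with a symmetric hash ratchet in the spirit of Signal's KDF chains (a simplified sketch, not the actual Double Ratchet): each message key is derived from a chain key that is immediately replaced, so a later compromise cannot walk backwards to earlier keys.

```python
import hashlib

def ratchet(chain_key: bytes):
    """Derive a one-time message key and the next chain key from the current one."""
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain

chain = hashlib.sha256(b"initial shared secret").digest()
message_keys = []
for _ in range(3):
    mk, chain = ratchet(chain)   # the old chain key is dropped (zero it in practice)
    message_keys.append(mk)

# Three distinct one-time keys. Seizing the *current* chain key does not let an
# attacker recompute the earlier message keys, because the hash is one-way.
assert len(set(message_keys)) == 3
```

Deletion is the active ingredient: the math only helps if old chain and message keys are actually erased after use.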
Briar — Offline Messaging Over Tor/Bluetooth
What it is: Messaging app for activists that works over Tor and Bluetooth mesh; works entirely offline.
How it works:
- Messages encrypted end-to-end; synced via Tor or portable devices
- No central server to compel or compromise
- Works even if the internet is cut (governments cannot censor by blocking connectivity)
- App code is open-source; users can verify no backdoors exist
Coercion resistance: Extremely high. Attacker must physically seize devices to access encrypted data. Even then, passphrase protection resists decryption.
VeraCrypt — Hidden Volumes (Plausible Deniability)
What it is: Disk encryption tool that supports hidden encrypted volumes within larger encrypted volumes.
How it works:
- User encrypts outer volume with one password; hidden volume within uses different password
- To outside observer, hidden volume does not exist; space appears to be random padding
- Under coercion, user can reveal outer password; hidden volume remains secure
Coercion resistance: User can plausibly claim there is no hidden volume. Coercers cannot prove otherwise.
Anti-Example: WhatsApp Server Records
The problem: WhatsApp (despite end-to-end encryption) stores metadata: who contacted whom and when. This metadata is readable by the operator and can be subpoenaed.
Consequence: Government can determine who is communicating but not what they say. For many purposes, this is sufficient to identify dissidents or targets.
Lesson: Coercion resistance requires defending metadata, not just message content.
Scope and Applicability
When to prioritize Coercion Resistance:
- Users in authoritarian contexts (political dissidents, LGBTQ+ in hostile regions, refugees)
- Abuse-survivor support systems (domestic violence, trafficking)
- Sensitive communication (journalist sources, medical privacy, legal privilege)
- Systems operating in jurisdictions with aggressive surveillance (mass data retention laws, forced backdoors)
- Systems where data disclosure would endanger users (political networks, anti-government organizing)
When you might defer Coercion Resistance:
- Systems operating in stable democracies with strong privacy law (still build it; just lower urgency)
- Internal enterprise tools where users are not under external threat
- Systems where compliance (lawful disclosure under court order) is non-negotiable
Never defer for systems serving vulnerable users. If your users face threat, their physical safety depends on it.
Synthesis Lineage: Disciplinary Roots
Coercion Resistance formalizes patterns established across cryptography, security, and activism:
Cryptanalysis & Forward Secrecy
Modern cryptographic protocol design, building on Diffie-Hellman key exchange (1976), emphasizes ephemeral keys and forward secrecy. The insight: even if long-term keys are compromised, past sessions remain secure as long as session keys are deleted after use.
Protective Computing applies: Users need assurance that past communications cannot be decrypted, even if system is later compromised or coerced.
Plausible Deniability & Steganography
Security research (Anderson, Petitcolas) and cryptography (Rivest's "Chaffing and Winnowing") show that data can be hidden such that its existence is deniable. This protects against coercion: victim can truthfully claim "I don't have what you're asking for."
Protective Computing applies: Victims should be able to deny access to sensitive data without lying, making coercion strategically useless.
Zero-Knowledge Proofs
Mathematical cryptography (Goldwasser, Micali, and Rackoff) enables systems where authority is proven without disclosure. A user can prove they know a secret without revealing it.
Protective Computing applies: Systems should never require users to disclose secrets; proof of knowledge is sufficient.
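The proof-of-knowledge idea can be sketched with a toy Schnorr identification scheme made non-interactive via Fiat-Shamir. The parameters here are deliberately tiny (p = 23) for readability; real systems use roughly 256-bit prime-order groups:

```python
import hashlib
import random

# Toy Schnorr proof of knowledge, non-interactive via Fiat-Shamir.
# g = 2 generates the subgroup of order q = 11 inside Z_23* (demo-sized group).
p, q, g = 23, 11, 2

def challenge(*ints) -> int:
    # Fiat-Shamir: derive the challenge by hashing the transcript.
    data = b"".join(i.to_bytes(8, "big") for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q)    # the secret: never leaves the prover
y = pow(g, x, p)              # public key: safe to disclose

# Prover: commit, derive the challenge, respond -- without ever sending x.
r = random.randrange(1, q)
t = pow(g, r, p)
c = challenge(t, y)
s = (r + c * x) % q

# Verifier accepts iff g^s == t * y^c (mod p), and learns nothing about x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The coercion-relevant property: the transcript (t, c, s) convinces the verifier yet contains no recoverable trace of x, so the system never needs to hold anything worth extracting.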
Activist & Dissident Security Practices
Security research emerging from journalists, human rights organizations, and surveillance researchers documents coercion methods and countermeasures (EFF, Freedom of the Press Foundation, Guardian Project).
Protective Computing formalizes: Patterns observed in practice by activists and dissidents facing real coercion.
Relationship to Other Principles
Coercion Resistance depends on:
- Local Authority: If users retain local copies of keys and data, coercers cannot compel servers to disclose.
- Exposure Minimization: Less data collected = fewer attack surfaces. Minimizing stored data reduces coercion targets.
Coercion Resistance enables:
- Reversibility: Deniable messaging lets users repudiate past communications; nothing they sent is provably theirs.
Next Steps
For system designers:
- Identify your users: Are they under threat? From whom?
- Map coercion vectors: How might attackers pressure users or administrators?
- Eliminate master keys: If system administrators have decryption access, coercers will target them.
- Implement zero-knowledge: Users encrypt locally; system stores only ciphertext.
- Add key derivation: Passphrases → keys via Argon2 or Scrypt (not fast hashes).
- Consider deniability: Can users plausibly claim sensitive data does not exist?
- Plan destruction: How are keys and data destroyed if system is compromised or user is threatened?
Related Principles
- Reversibility: Deniable messaging keeps authorship unprovable, so users can plausibly deny past communications.
- Exposure Minimization: Minimize data collected = minimize attack surface for forced extraction.
- Local Authority: Users holding encryption keys locally means coercers cannot compel server disclosure.
- Degraded Functionality: Coercion resistance may require simpler architecture; complexity reduces security.
- Essential Utility: Systems for vulnerable users must prioritize security over convenience.
Next principle to explore: