Protective Computing Specification v1.0
Version 1.0 | Published February 25, 2026 | Status: Stable
Formal specification defining principles, threat model, and compliance criteria for systems serving users under conditions of human vulnerability and institutional threat.
Protective Computing is a discipline for designing systems that serve users reliably under conditions of human vulnerability, institutional threat, and resource scarcity.
This specification defines:
- Formal threat model baseline (authoritarian surveillance, coercion, censorship)
- Six core principles with normative requirements (RFC 2119 language)
- Stability assumptions and architectural constraints
- Compliance maturity levels (Level 1–4)
- Versioning and deprecation policy
Conformance to this specification certifies that a system prioritizes user autonomy, resilience, and welfare over competing objectives (profit, engagement, efficiency).
1. Definitions & Keywords
Key Terms
User: An individual utilizing a system to accomplish an essential purpose, potentially under threat from institutional or adversarial actors.
Threat: Institutional coercion (legal compulsion, arrest, torture), mass surveillance (data collection without consent), censorship (access denial), or device seizure.
Resilience: Continued system functionality under degraded conditions—limited bandwidth, power, compute, or when administrators are compromised.
Stability Assumption: Infrastructure is unreliable; users have scarce cognitive, temporal, and computational resources; adversaries are sophisticated and persistent.
Attestation: Third-party verification that a system meets specified Protective Computing compliance level.
Normative Language: Conformance keywords from RFC 2119:
- MUST — absolute requirement; system must implement
- SHOULD — strong recommendation; justified deviation requires documentation
- MAY — truly optional; implementers may include or omit the feature
2. Threat Model
Protective Computing covers threats from:
- State Surveillance Apparatus: Mass data retention, metadata analysis, pattern-of-life tracking
- Authoritarian Coercion: Forced access demands, torture, threats of violence against user or family
- Institutional Control: Account freezing, content removal, access denial via central authority
- Network Tampering: DDoS attacks, packet injection, man-in-the-middle interception, DNS manipulation
- Device Seizure: Theft, confiscation, forensic extraction (assumes secure passphrase resistance)
Out of Scope
- Cryptanalysis of broken algorithms: Assumes only vetted NIST/IETF-approved cryptography
- 0-day software vulnerability exploitation: Assumes responsible vulnerability disclosure and patching
- Post-quantum attacks: Assumes quantum computers do not pose an immediate threat (to be addressed in v2.0)
- Physical access after authentication: If the passphrase is compromised, no technical protection remains; safeguarding credentials is the user's responsibility
Conservative Design Assumption
Systems MUST assume:
- Networks are hostile (encryption required for all sensitive data in transit)
- Administrators cannot be trusted (system must not store plaintext user data)
- Infrastructure will fail (graceful degradation required; offline operation where feasible)
- Users are under constant monitoring (minimize data footprint; assume worst-case disclosure)
3. Stability Assumption
This specification is stable for all v1.x releases (v1.0, v1.1, v1.2, etc.). A system claiming Protective Computing v1.0 compliance remains compliant through all subsequent v1.x releases; minor and patch revisions introduce no breaking changes.
Stability Guarantees
Systems claiming v1.0 compliance MUST:
- Implement all six core principles at specified compliance level
- Document threat model assumptions and scope
- Provide evidence of third-party audit (Level 3+)
- Maintain compliance as v1.x minor revisions are released
- Declare version-specific compliance: e.g., "Protective Computing v1.0 Level 3"
Version Stability Mechanics
| Release Type | Example | Breaking Changes? | Backward Compatible? |
|---|---|---|---|
| Patch | v1.0.1 → v1.0.2 | No | Yes (exact) |
| Minor | v1.0 → v1.1 | No | Yes (v1.0 systems remain compliant) |
| Major | v1.9 → v2.0 | Possible | No (v1.x and v2.0 coexist; migration window provided) |
4. Core Principles (Normative)
All six principles are mandatory for Protective Computing compliance. Systems claiming compliance must implement all six at their declared level.
4.1 Reversibility
Principle: User actions and system changes MUST be undoable within documented recovery windows. Failures MUST NOT become irreversible harm.
- MUST provide an undo mechanism for all destructive user actions (delete, modify, publish)
- MUST display recovery window duration to user (e.g., "Item will be permanently deleted in 30 days")
- SHOULD maintain complete version history with point-in-time rollback capability
- MUST NOT permanently delete user data without explicit confirmation + mandatory delay (minimum 7 days)
- MUST document which system actions are reversible and which are irreversible
- MAY implement administrator-initiated recovery in enterprise contexts (with audit logging)
| Compliance Level | Requirement |
|---|---|
| Level 1 | Soft deletion: deleted data remains recoverable for 7+ days before permanent erasure |
| Level 2 | Undo/redo stack: user can reverse actions within session |
| Level 3 | Complete version history with point-in-time restore; 90+ day retention |
| Level 4 | ACID transaction semantics; atomic operation guarantees across all data stores |
4.2 Exposure Minimization
Principle: Data MUST be collected only when essential. What is collected MUST be defended with cryptography. Data retention MUST be minimal and automatic.
- MUST perform data minimization audit: justify every data field collected
- MUST encrypt sensitive data at rest (AES-256 minimum; ChaCha20 acceptable)
- MUST use TLS 1.3+ for all data in transit; TLS 1.2 acceptable with strong ciphersuites only
- MUST define explicit retention policy for every data field (max age before auto-delete)
- SHOULD implement zero-knowledge architecture: user holds encryption keys; system stores only ciphertext
- MUST NOT sell, share, or broker user data without explicit informed consent (opt-in, not opt-out)
- MAY aggregate/anonymize data for statistics with explicit user consent
| Compliance Level | Requirement |
|---|---|
| Level 1 | Data minimization audit completed; AES-256 at rest; TLS 1.3+ in transit |
| Level 2 | Level 1 + auto-deletion policies for all fields; no ad networks or trackers |
| Level 3 | Level 2 + zero-knowledge for sensitive data; user holds encryption keys; regular key rotation |
| Level 4 | Level 3 + provable data minimization via differential privacy; formal verification of encryption implementation |
4.3 Local Authority
Principle: Users MUST retain control and function locally. Systems MUST NOT require internet for essential operations.
- MUST support offline operation for all essential user workflows
- MUST cache essential data on the user's device; users maintain a local copy
- MUST sync gracefully without blocking user workflow (asynchronous, eventual consistency)
- SHOULD use eventual consistency, not strong global consistency
- MUST NOT require authentication for offline access to cached user data
- MAY use server as optional backup/sync point, not gatekeeper to user's own data
- MUST document offline vs. online feature parity and sync behavior
| Compliance Level | Requirement |
|---|---|
| Level 1 | Offline read capability; cached data available without network |
| Level 2 | Full offline operation: read + write; changes sync when connectivity returns |
| Level 3 | Multi-device sync with automatic conflict resolution (CRDT or operational transforms) |
| Level 4 | Peer-to-peer sync capability; no central server required; direct device-to-device replication |
4.4 Coercion Resistance
Principle: Users MUST be able to maintain confidentiality and integrity even under physical or legal coercion.
- MUST use encryption where users hold keys; system cannot decrypt user data
- MUST support strong passphrases (minimum 128 bits of entropy; roughly ten Diceware-style words)
- MUST use a slow, memory-hard key derivation function (Argon2id or scrypt; never fast hashes such as MD5 or SHA-1)
- MUST NOT provide administrative backdoors or master keys
- SHOULD implement plausible deniability features (decoy accounts, hidden data)
- MAY implement dead man's switch (automatic data destruction if conditions unmet)
- MUST document threat model clearly: what adversaries can and cannot extract
| Compliance Level | Requirement |
|---|---|
| Level 1 | Strong encryption; user-held keys; no master backdoors |
| Level 2 | Level 1 + passphrase-based encryption with slow key derivation (Argon2id or scrypt) |
| Level 3 | Level 2 + plausible deniability (decoy accounts or hidden containers); formal threat model documentation |
| Level 4 | Level 3 + threshold key splitting (Shamir's Secret Sharing); distributed key reconstruction |
4.5 Degraded Functionality
Principle: Systems MUST remain usable when bandwidth, power, compute, or cognition are severely constrained.
- MUST test baseline path on 2G networks (<100KB initial HTML load)
- MUST function on devices with <512MB RAM
- MUST support complete keyboard-only navigation (no mouse required)
- MUST gracefully degrade features under resource constraints
- SHOULD implement progressive enhancement (plain HTML works without JavaScript)
- MUST NOT auto-load media (user must explicitly request video/audio)
- MUST meet WCAG 2.1 Level AA accessibility standards (color contrast, screen reader support)
| Compliance Level | Requirement |
|---|---|
| Level 1 | Mobile responsive; plaintext/minimal CSS fallback on JS failure |
| Level 2 | Level 1 + tested on 2G networks; works with <512MB RAM; no large media autoload |
| Level 3 | Level 2 + progressive enhancement (HTML baseline works without JS); keyboard-only navigation; WCAG AA |
| Level 4 | Level 3 + measurable performance targets (<3s load on 2G); documented accessibility audit |
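The <100 KB 2G baseline lends itself to an automated budget check in CI. This sketch measures the gzip-compressed size of the initial HTML, since compressed bytes approximate what actually crosses a slow link; the helper name and budget constant are illustrative:

```python
import gzip

BUDGET_BYTES = 100 * 1024  # <100 KB initial HTML load (2G baseline above)

def initial_load_ok(path: str) -> tuple[bool, int]:
    """Return (within_budget, compressed_size) for the initial HTML payload."""
    with open(path, "rb") as f:
        compressed = gzip.compress(f.read())
    return len(compressed) <= BUDGET_BYTES, len(compressed)
```

Failing the build when the budget is exceeded keeps the degraded-functionality guarantee from eroding one dependency at a time.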
4.6 Essential Utility
Principle: Systems MUST optimize for user survival and autonomy. Features MUST serve essential needs, not engagement metrics or extraction.
- MUST document essential use cases explicitly; justify every feature
- MUST NOT use dark patterns (hidden friction-to-exit, manipulative notifications, deceptive confirmations)
- SHOULD minimize cognitive load (one action per screen; plain language; visible defaults)
- MUST NOT include addictive mechanics (streaks, variable rewards, leaderboards, FOMO notifications)
- MUST measure success by user goal completion, not engagement time or feature adoption
- SHOULD fund transparently (donations, grants, nonprofit support; not user data sales)
- MUST NOT require payment for essential features (accessibility tiers acceptable; never paywall core utility)
| Compliance Level | Requirement |
|---|---|
| Level 1 | No engagement metrics; no dark patterns; transparent funding model |
| Level 2 | Level 1 + minimal cognitive load; success measured by user goal completion |
| Level 3 | Level 2 + independent audit for dark patterns; user satisfaction metrics |
| Level 4 | Level 3 + user-controlled feature set (customizable UI/UX); annual compliance audit |
5. Principle Interdependency
The six principles are mutually reinforcing. Weakness in one undermines others. This section formalizes the relationships.
Reinforcement Matrix
| Principle A | Principle B | Relationship |
|---|---|---|
| Reversibility | Exposure Minimization | Synergistic: keeping less data simplifies recovery windows; undo/redo prevents data loss from becoming harm. |
| Local Authority | All Principles | Foundational: Offline operation enables resilience across all principles. Systems that replicate locally can enforce all others. |
| Coercion Resistance | Exposure Minimization + Local Authority | Required: User-held keys require local data. Minimal data collection reduces extraction targets. |
| Degraded Functionality | Essential Utility | Aligned: Prioritizing essential features directly maps to graceful degradation under scarcity. |
| Essential Utility | All Principles | Overriding: User welfare is the master constraint. All principles serve it. |
Known Tensions
- Strong Consistency vs. Local Authority: Requiring quorum consensus conflicts with offline operation. Resolution: Protective Computing prioritizes availability (Local Authority) over consistency.
- Server-Side Processing vs. Exposure Minimization: Efficient computation on servers reveals data. Resolution: Accept performance cost; process on user device when possible.
- Feature Richness vs. Degraded Functionality: Complex UX fails on constrained devices. Resolution: Essential features work everywhere; enhanced features for users with resources.
6. Compliance Levels
All systems claiming Protective Computing compliance MUST declare a compliance level (1–4) for each principle. A system's overall compliance is determined by its weakest principle.
Example claim: "Signal Messenger is Protective Computing v1.0 Level 2: Reversibility (L3), Exposure Minimization (L4), Local Authority (L2), Coercion Resistance (L4), Degraded Functionality (L2), Essential Utility (L4)" — the overall level equals the weakest principle (here L2).
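The weakest-principle rule is trivially mechanizable. A sketch, using the six principle names from Section 4 with illustrative levels:

```python
def overall_level(levels: dict[str, int]) -> int:
    """Overall compliance equals the weakest principle's level (min over all six)."""
    return min(levels.values())

claim = {
    "Reversibility": 3, "Exposure Minimization": 4, "Local Authority": 2,
    "Coercion Resistance": 4, "Degraded Functionality": 2, "Essential Utility": 4,
}
```

With these levels, `overall_level(claim)` is 2: the two principles at Level 2 cap the overall claim regardless of the Level 4 scores elsewhere.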
Level 1: Foundation
System implements all six principles at basic level. No independent audit required.
- Suitable for: Internal tools, MVPs, non-critical systems, prototype validation
- Effort: Minimal; engineering team self-certifies compliance
- Claim format: "Protective Computing v1.0 Level 1"
Level 2: Standard
System implements all six principles with documented depth. Minimal external review recommended.
- Suitable for: Production systems, public tools, significant user bases
- Effort: Moderate; security code review recommended (not required)
- Claim format: "Protective Computing v1.0 Level 2"
Level 3: Verified
Independent security audit verifies compliance. Threat model formally documented.
- Suitable for: High-risk contexts, vulnerable populations, critical infrastructure
- Effort: Significant; requires third-party security audit ($5k–$50k depending on scope)
- Claim format: "Protective Computing v1.0 Level 3 (Auditor: [organization name])"
- Requirements: Third-party audit report, threat model documentation, compliance matrix
Level 4: Certified
Formal certification by Protective Computing Foundation (TBD). Continuous compliance monitoring. Annual recertification.
- Suitable for: Enterprise, government, mission-critical, production infrastructure
- Effort: Substantial; requires formal audit + ongoing compliance engagement
- Claim format: "Protective Computing v1.0 Level 4 Certified (Cert ID: [ID])"
- Requirements: Formal certification audit, code review, public registry listing, annual audit
7. Versioning Policy
Protective Computing uses Semantic Versioning: v[Major].[Minor].[Patch]
Release Tiers
Patch Releases (v1.0.0 → v1.0.1)
- Content: Bug fixes, clarifications, errata corrections
- Breaking Changes: None
- Backward Compatibility: Full (v1.0.0 compliant systems remain compliant on v1.0.1)
- Migration Required: No
Minor Releases (v1.0 → v1.1)
- Content: New implementation guidance, clarifications to threat model, additional examples
- Breaking Changes: None
- Backward Compatibility: Full (v1.0 compliant systems remain compliant on v1.1)
- Migration Required: No; optional upgrade to benefit from new guidance
- Example changes: New domain-specific extensions (healthcare, journalism); new threat scenarios
Major Releases (v1.x → v2.0)
- Triggers: New principle added, threat model fundamentally expanded, major architectural paradigm shift
- Breaking Changes: Possible
- Backward Compatibility: NO (v1.x systems do not automatically comply with v2.0)
- Migration Window: v1.x and v2.0 coexist for minimum 2 years; systems can claim dual compliance (e.g., "v1.4 and v2.0")
- Not Triggered By: New tools, new domain applications, new certifiers, performance improvements
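The compatibility rule across release tiers can be checked mechanically: a claim carries forward within a major line but never across a major boundary. A sketch assuming version strings like "v1.4" or "v1.0.1" (function names are illustrative):

```python
def major_of(version: str) -> int:
    """Extract the major component from a version string like 'v1.4' or 'v1.0.1'."""
    return int(version.lstrip("v").split(".")[0])

def claim_still_valid(claimed: str, current_spec: str) -> bool:
    """Minor and patch releases are non-breaking, so a compliance claim carries
    forward within the same major line; it never crosses a major boundary
    automatically (dual compliance during a migration window is a new claim)."""
    return major_of(claimed) == major_of(current_spec)
```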
Example Timeline
- v1.0 (Feb 2026): Six principles, threat model, compliance framework — Stable
- v1.1 (Q4 2026): Healthcare extension, journalism guidance, new threat scenarios — Backward compatible
- v1.2 (Q2 2027): Mobile-specific guidance, accessibility expansion — Backward compatible
- v2.0 (2028–2030): New principle (e.g., "Transparency") or revised threat model — Breaking changes possible; coexistence window required
8. Non-Scope
Protective Computing does not address:
- Specific cryptographic algorithms: Delegates to NIST, IETF, and cryptanalysis community. Specification is agnostic to algorithm choice (as long as vetted).
- Programming languages or platforms: Language-agnostic. Applies to web, mobile, desktop, IoT, systems software.
- Custom threat models: Specification covers baseline. Organizations extending to additional threats are encouraged; extension must be documented.
- Performance targets: No "systems MUST load in <1 second" requirements (varies by context). Baseline: works on 2G networks.
- User interface aesthetics: Design is not specified. "Beautiful" is not required; "usable" is.
- Organizational governance: Non-profit vs. for-profit, team structure, funding source not prescribed (as long as transparent).
Protective Computing is ORTHOGONAL to (not instead of):
- Legal Compliance: GDPR, CCPA, HIPAA, etc. Protective Computing + Legal Compliance are complementary requirements.
- Security Standards: ISO 27001, SOC 2, Common Criteria. Protective Computing works alongside these.
- Accessibility Standards: WCAG 2.1, Section 508. Protective Computing includes accessibility; additional standards are compatible.
Systems may (and should):
- Claim Protective Computing compliance AND GDPR compliance simultaneously
- Achieve Protective Computing Level 4 AND SOC 2 Type II certification
- Be WCAG AAA AND Protective Computing Level 3
FORMAL SPECIFICATION ENDS HERE
For further reading, see:
- Getting Started with Protective Computing — Practical implementation guide
- Annex: MUST Justifications — Defensibility ledger for normative requirements
- Principle Deep-Dives — Detailed reference documentation for each principle
- The Overton Framework (DOI) — Full disciplinary documentation