Independent Review & Critique
We invite auditors, security researchers, accessibility experts, and field practitioners to review Protective Computing Specification v1.0 and our reference implementations. Your feedback is critical for credibility and institutional integrity.
Protective Computing transitioned from thought leadership to a normative standard with RFC 2119 language, formal threat model, and compliance levels. This is a significant institutional claim.
We need independent critique to validate that claim. We are explicitly NOT seeking reassurance; we are seeking honest assessment of:
- Are the principles internally consistent and non-redundant?
- Is the threat model baseline realistic and well-scoped?
- Are compliance levels achievable and meaningfully differentiated?
- Does the reference implementation methodology demonstrate honest assessment (not marketing)?
- Are there gaps in the specification that future versions must address?
- Where does Protective Computing conflict with other standards (privacy law, security standards, accessibility)?
Who We're Looking For
Security Researchers & Cryptographers
- Review threat model: Are baseline adversaries well-chosen? Any glaring omissions?
- Review cryptography assumptions: Do compliance levels match actual security properties?
- Test reference implementation: Does PainTracker actually provide claimed protections?
Practitioners (Healthtech, Civic Tech, Nonprofit Tech)
- Apply principles to your own systems: Are they implementable?
- Document your compliance: Submit your own reference implementations
- Report blockers: Where does Protective Computing conflict with your constraints?
Accessibility Experts (Universal Design)
- Review Degraded Functionality principle: Does it align with WCAG, disability justice, universal design principles?
- Audit reference implementation: Where does PainTracker fall short on accessibility?
- Propose improvements: How should the spec address accessibility gaps?
Privacy Engineers & Legal Experts
- Consider data protection law: How does Protective Computing align with GDPR, CCPA, other frameworks?
- Identify regulatory gaps: Where does the spec need legal grounding?
- Review threat model: Is it consistent with privacy law's assumptions about harm?
Institutional Review Boards / Research Ethics
- Evaluate research integrity: Is the specification suitable as a reference for human-centered design research?
- Review design claims: Do principles make testable, falsifiable claims suitable for evidence-based design?
How to Contribute Your Review
Open an issue on the GitHub repository with:
- Title: Clear description of the critique (e.g., "Threat Model: Missing State-Sponsored Insider Threats")
- Type: Label as `review`, `spec-feedback`, `reference-impl-feedback`, or `gap`
- Body: Your assessment, evidence, and recommended change (if applicable)
- Attribution: Your name/affiliation (can be anonymous if preferred)
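The fields above can also be filed programmatically via the GitHub REST API. The sketch below is illustrative only: `OWNER/REPO` and the token are placeholders (the actual repository is linked from the project site), and the request is left commented out so the payload can be inspected first.

```python
import json
import urllib.request

# Hypothetical example of filing review feedback against the spec repository.
# OWNER/REPO and <token> are placeholders, not the real repository or credentials.
payload = {
    "title": "Threat Model: Missing State-Sponsored Insider Threats",
    "labels": ["review"],  # or spec-feedback, reference-impl-feedback, gap
    "body": (
        "Assessment: ...\n"
        "Evidence: ...\n"
        "Recommended change: ...\n"
        "Attribution: Jane Doe (anonymous on request)"
    ),
}
req = urllib.request.Request(
    "https://api.github.com/repos/OWNER/REPO/issues",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer <token>",
        "Accept": "application/vnd.github+json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment once OWNER/REPO and token are real
print(json.dumps(payload, indent=2))
```

Filing through the web UI with the same four fields is equally fine; the API route just makes batch submissions easier for auditors reporting many findings.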
Response time: We commit to reviewing and responding to issues within 14 days.
For issues that require confidentiality (e.g., security vulnerabilities in reference implementation), email:
review @ protective-computing . org
Response time: We will respond within 7 days and discuss publication timeline for urgent feedback.
Submit Your Own Reference Implementation
Build a compliance mapping for your own system (following the PainTracker template). We will:
- Review your assessment for methodology integrity
- Link it from this site and in the sitemap (with your consent)
- Use your feedback to improve the specification
Start by opening a GitHub issue titled "Reference Implementation: [Your System]" and we'll coordinate.
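To make the shape of a compliance mapping concrete, here is a minimal sketch in machine-readable form. The field names, principle names' levels, and evidence strings are illustrative stand-ins, not normative parts of the specification or the actual PainTracker template.

```python
# Illustrative compliance mapping for a hypothetical submission.
# Field names and claimed levels are examples only.
compliance_mapping = {
    "system": "Example System",
    "spec_version": "1.0",
    "principles": [
        {
            "principle": "Degraded Functionality",
            "claimed_level": 2,
            "evidence": "App remains usable offline; sync is optional.",
            "gaps": ["No low-vision audit yet"],
        },
        {
            "principle": "Coercion Resistance",
            "claimed_level": 1,
            "evidence": "Local encryption at rest.",
            "gaps": ["No deniable storage; device seizure out of scope"],
        },
    ],
}

def summarize(mapping):
    """Return a one-line summary per principle, flagging documented gaps."""
    lines = []
    for p in mapping["principles"]:
        status = "gaps documented" if p["gaps"] else "no gaps listed"
        lines.append(f'{p["principle"]}: Level {p["claimed_level"]} ({status})')
    return lines

for line in summarize(compliance_mapping):
    print(line)
```

Note that the example documents gaps explicitly per principle; as the review criteria below make clear, a mapping with no gaps listed is more likely to signal cherry-picking than full compliance.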
What Feedback Is Most Valuable
| Type | Example | Impact |
|---|---|---|
| Internal contradictions | "Coercion Resistance (Level 3) requires deniability, but Reversibility principle doesn't allow hidden deletion—these conflict." | Highest — directly affects spec validity |
| Evidence-based gap | "Specification assumes devices are not compromised (malware). But in [context], app-level encryption is useless without OS security. This should be explicit in threat model." | Highest — improves scope clarity |
| Implementation evidence | "I tried to build Level 4 compliance for X principle and found [specific blocker]. This may need a roadmap adjustment." | High — grounds spec in practice |
| Comparative analysis | "Protective Computing's Degraded Functionality is similar to ISO 27001's 'availability' but with different priorities. How do you want this relationship documented?" | High — positions spec in landscape |
| Concern about phrasing | "The definition of 'essential utility' in the spec could be misinterpreted to justify [harmful situation]. Suggest tightening language." | Medium — improves clarity |
| Feature requests | "Would you ever add a principle for [new thing]?" | Lower priority — v2.0 discussion |
How We Handle Your Feedback
- Acknowledgment: All feedback receives a response within 7–14 days
- Triage: We categorize feedback as:
  - `spec-change-required`: affects v1.1 or v2.0 roadmap
  - `clarification-needed`: docs expansion without version change
  - `disagree`: we'll explain why and provide rationale
  - `acknowledged-future`: valuable but deferred (version TBD)
- Transparency: All feedback and responses are public (unless marked confidential) and published in a public review log
- Attribution: Feedback credited to you (with your consent) in acknowledgments section and release notes
- Versioning: Feedback that triggers spec changes is tied to version history (e.g., "v1.1 incorporates feedback from [names]")
Criteria for Reference Implementation Review
If you're submitting your own reference implementation, we'll use this framework to evaluate methodology integrity:
| Criterion | What We're Looking For |
|---|---|
| Honesty about gaps | Clear documentation of where compliance is partial or missing; no cherry-picking examples |
| Reproducible verification | Audit checkpoints are specific, technical, and could be run by an independent auditor |
| Threat model alignment | Clearly states which baseline threats are resisted vs. not; doesn't overstate protection |
| Trade-offs documented | Explains design choices (e.g., "We prioritized availability over full Coercion Resistance because...") |
| Roadmap clarity | Future versions and missing features are explicit; no vague promises |
| Scope declaration | Clear about who should and shouldn't use the system |
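Before submitting, you can self-check a mapping against these six criteria mechanically. The sketch below assumes one illustrative field per criterion; the field names are hypothetical and only mirror the table, they are not a required schema.

```python
# Hypothetical pre-submission self-check against the methodology-integrity
# criteria. Field names are illustrative, mapped one-to-one onto the table.
REQUIRED_FIELDS = {
    "gaps",               # Honesty about gaps
    "audit_checkpoints",  # Reproducible verification
    "threats_resisted",   # Threat model alignment
    "trade_offs",         # Trade-offs documented
    "roadmap",            # Roadmap clarity
    "intended_users",     # Scope declaration
}

def missing_criteria(submission: dict) -> set:
    """Return the criterion fields a submission has not addressed."""
    return {f for f in REQUIRED_FIELDS if not submission.get(f)}

example = {
    "gaps": ["Level 4 Reversibility not implemented"],
    "audit_checkpoints": ["Inspect on-disk files to confirm encryption at rest"],
    "threats_resisted": ["passive network observer"],
    "trade_offs": "Prioritized availability over full Coercion Resistance.",
    "roadmap": "v2: deniable storage",
    # "intended_users" deliberately omitted to show the check firing
}
print(missing_criteria(example))  # → {'intended_users'}
```

A check like this only catches missing sections, not weak ones; the human review still judges whether the evidence in each field is honest and reproducible.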
Review Timeline & Specification Roadmap
We're currently accepting feedback for the following roadmap:
- v1.0 (Current): Formal specification with RFC 2119 language, threat model, compliance levels
- v1.1 (Q2 2026): Incorporates critical feedback from this review period; documentation improvements
- v2.0 (2027): Major revision; new principle(s) if evidence suggests necessary; possible new threat model categories
Feedback received by June 30, 2026 will be considered for v1.1. Feedback after that date will inform v2.0 planning.
Questions About the Review Process?
- Use GitHub Discussions for process questions, or open an Issue for specific feedback
- For questions about the review process or confidential feedback, use the email address listed above
Our Commitment to Transparency
- We will not: Dismiss or ignore critical feedback because it challenges our design. We will respond with evidence and rationale.
- We will: Publish a quarterly review log showing feedback, our responses, and status of incorporated changes
- We will: Credit all contributors by name (unless anonymous request) in spec versions and release notes
- We will: Maintain a "Rejected Feedback & Rationale" section to document where we deliberately chose not to incorporate suggestions and why
- We will not: Consider feedback proprietary; all reviews are part of the public record (unless confidentiality is requested)