External Compression Layer

Legitimacy & Evidence

Protective Computing is a design discipline for software operating under human vulnerability, coercion, instability, and degraded conditions. This page is the shortest path for outsiders to answer four questions: what this is, what proof exists, how it maps to recognized standards, and how another builder can apply it without adopting the founder narrative first.

Current posture
Foundational, not yet externally hardened

Public canon, normative spec, audit artifacts, and a reference implementation exist. Independent reviews, inter-rater scoring, and formal standards crosswalk depth are the next legitimacy threshold.

Bridge statement

Protective Computing does not replace NIST, ISO, SOC 2, or OWASP. It identifies human failure conditions those frameworks often under-specify, then translates them into verifiable architecture, defaults, controls, and evidence.

What This Is

Discipline

A bounded design standard

Protective Computing defines system behavior when users face coercion, unstable infrastructure, degraded devices, or institutional scrutiny. The normative core is published in the specification, principles index, threat model, and MUST-justifications ledger.

Applicability

Usable beyond one worldview or one domain

The pattern language is not limited to health apps, crisis contexts, or one product. Builders can start with a threat model, audit checklist, evidence packet structure, and reference implementation mapping without adopting the full canon first.

Who Can Verify It

Independent roles

Reviewers do not need to agree with the discipline to test it

  • Privacy engineers validating data-minimization and retention claims
  • Security auditors reviewing control boundaries, egress, and evidence quality
  • Software architects testing degraded-mode survivability and local authority
  • Clinicians, disability advocates, or trauma-informed practitioners checking real-world failure assumptions
  • Researchers or founders applying the rubric to their own systems

Current public review path

Inspectability is already open

Public review hooks already exist, but they are still lightly populated. The current review entry points are the audit page, evidence index, independent-review invitation, and issue-template planning artifacts.

Missing proof: named external assessments, reviewer packets, and published disagreement logs.

Proof Beyond Founder Claims

Layer: Theory
  • Current proof: Canon, Overton Framework, public specification
  • External anchor: NIST AI RMF governance structure, NIST Privacy Framework risk language
  • Missing proof: Concise third-party citations or literature positioning
  • Next action: Publish a short academic abstract and citation-ready overview

Layer: Engineering
  • Current proof: Patterns, threat models, MUST ledger, implementation specs
  • External anchor: OWASP ASVS, ISO/IEC 27001 control expectations
  • Missing proof: Formal crosswalk packet and implementation checklists per standard
  • Next action: Expand standard mappings into a dedicated crosswalk artifact set

Layer: Evaluation
  • Current proof: PLS rubric draft and LaTeX release candidate
  • External anchor: NIST Measure/Manage patterns, SOC 2-style evidence framing
  • Missing proof: Inter-rater reliability evidence and revision log
  • Next action: Run one public scored walkthrough with multiple raters

Layer: Reference systems
  • Current proof: PainTracker mapping and audit artifacts
  • External anchor: Application security and privacy review baselines
  • Missing proof: Reference packet with explicit limitations and negative claims
  • Next action: Package PainTracker as a versioned reference packet

Layer: Governance
  • Current proof: Versioned repository, CI audit gates, issue template planning
  • External anchor: Open RFC and changelog discipline used in mature standards projects
  • Missing proof: Public contribution and review process with acceptance criteria
  • Next action: Publish RFC workflow and reviewer-facing governance notes

Standards Crosswalk Direction

The current repository already includes compliance mapping and audit evidence. The next externalization step is a standards crosswalk set that states, with bounded language, how Protective Computing complements recognized frameworks.
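One low-cost shape for that crosswalk set is a machine-readable record per mapping, so CI can validate coverage and reviewers can diff claims. A minimal sketch in Python; the field names, requirement ID, and ASVS control reference are illustrative assumptions, not an official mapping:

```python
import json

# Hypothetical crosswalk entry: IDs and field names are illustrative only.
entry = {
    "pc_requirement": "PC-MUST-012",          # MUST-ledger ID (hypothetical)
    "relation": "complements",                # complements | overlaps | out-of-scope
    "external_standard": "OWASP ASVS",
    "external_control": "V8.3.4",             # illustrative control reference
    "bounded_claim": "Adds coercion-aware retention defaults the control does not specify.",
    "evidence": ["audit/retention-report.json"],
}

REQUIRED = {"pc_requirement", "relation", "external_standard",
            "external_control", "bounded_claim", "evidence"}

def validate(e: dict) -> bool:
    """An entry is valid when every required field is present and non-empty."""
    return REQUIRED <= e.keys() and all(e[k] for k in REQUIRED)

print(validate(entry))  # True
print(json.dumps(entry, sort_keys=True)[:40])  # stable serialization for diffing
```

Keeping the `bounded_claim` field short and falsifiable is what distinguishes a crosswalk from compliance marketing.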

Machine-Readable and Repeatable Proof

Already present

Versioned evidence and CI gates

Protective Computing already publishes audit commands, CI gate definitions, generated artifacts, and normative claims. That makes the work inspectable rather than purely rhetorical.
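A CI audit gate of this kind can be as small as a script that fails the build when required evidence artifacts are absent. A minimal sketch, with hypothetical artifact paths standing in for the project's real ones:

```python
from pathlib import Path

# Hypothetical evidence artifacts a release is expected to ship with.
REQUIRED_ARTIFACTS = [
    "evidence/threat-model.md",
    "evidence/audit-checklist.json",
    "evidence/must-ledger.md",
]

def missing_artifacts(root: str, required=REQUIRED_ARTIFACTS) -> list:
    """Return the required artifact paths that do not exist under root."""
    base = Path(root)
    return [p for p in required if not (base / p).is_file()]

# In CI, the gate would exit non-zero when anything is missing, e.g.:
#   missing = missing_artifacts(".")
#   if missing:
#       sys.exit("audit gate FAILED: " + ", ".join(missing))
```

The point is not the script itself but that the gate is versioned next to the claims it enforces, so "inspectable" stays true release over release.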

Long game

Signed evidence, not just prose

Future conformance claims can move toward signed packets, release-bound trust bundles, and machine-readable disclosures. That path aligns with broader work on verifiable credentials without claiming present-day certification.
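The step from prose to signed packets can begin with a canonical content digest plus a signature over that digest. A minimal sketch using stdlib HMAC as a stand-in for a real asymmetric signature scheme such as Ed25519; the packet fields and key handling are illustrative, not the project's actual format:

```python
import hashlib
import hmac
import json

def packet_digest(packet: dict) -> str:
    """Canonical SHA-256 digest: sorted keys give byte-stable serialization."""
    canonical = json.dumps(packet, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

def sign(packet: dict, key: bytes) -> str:
    """HMAC-SHA256 over the digest; a real deployment would use a public-key scheme."""
    return hmac.new(key, packet_digest(packet).encode(), hashlib.sha256).hexdigest()

def verify(packet: dict, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign(packet, key), signature)

# Hypothetical release-bound packet: claims plus the evidence they rest on.
packet = {"release": "v1.4.0", "claims": ["local-first storage"],
          "evidence": ["audit/2025-ci-run.json"]}
key = b"demo-key"  # illustrative only; never embed real keys in code
sig = sign(packet, key)
print(verify(packet, sig, key))                       # True
print(verify({**packet, "release": "v9"}, sig, key))  # False: tampering detected
```

Binding the signature to a release tag is what turns a conformance claim into something a third party can re-check without trusting the prose around it.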

What Another Builder Can Reuse Now

Another team should be able to take the discipline and apply it without importing the founder's biography, commercial services, or the full canon first. The current reusable starter surface is the threat model, audit checklist, evidence packet structure, and reference implementation mapping.

Still missing: a starter kit and a conformance claim template.

Boundary Discipline

Legitimacy depends on clear exclusions as much as ambitious claims. Protective Computing is not generic privacy branding, not compliance theater, and not a softness overlay on conventional software. It is design under instability with auditable behavior requirements.