
Live compliance and technical quality scorecard for the catalogue. Tracks clause-level regulatory coverage across 60 frameworks and per-category quality grades across all use cases. Built for auditors validating coverage claims, programme managers tracking compliance posture, and contributors gauging catalogue health. Every number is computed automatically by CI from the source JSON — no manual editing.

Global rollup


Four headline metrics summarise catalogue health. Clause coverage is the percentage of common regulatory clauses addressed by at least one use case. Priority-weighted adjusts that by clause importance, so high-priority gaps weigh more. Assurance further discounts by evidence strength — a clause only fully counts when backed by strong provenance. Tech quality is the weighted composite across all use-case categories (references, freshness, MITRE mapping, samples, and more). Together they answer: how much of the regulatory landscape do we cover, and how trustworthy is that coverage?

Clause coverage: common clauses covered by at least one UC
Priority-weighted: coverage weighted by clause priority
Assurance: coverage multiplied by evidence strength
Tech quality: weighted composite across 23 categories
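The three coverage percentages can be sketched as follows. This is a minimal illustration of the arithmetic described above, not the catalogue's actual schema: the `Clause` record, its field names, and the weight/strength scales are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Clause:
    covered: bool     # addressed by at least one use case
    priority: float   # hypothetical importance weight, e.g. 1.0-3.0
    assurance: float  # hypothetical evidence strength in [0, 1]

def rollup(clauses: list[Clause]) -> dict[str, float]:
    """Compute the three headline coverage metrics, as percentages."""
    total = len(clauses)
    weight_total = sum(c.priority for c in clauses)
    # Plain coverage: fraction of clauses covered at all.
    plain = sum(c.covered for c in clauses) / total
    # Priority-weighted: covered clauses count by their weight.
    weighted = sum(c.priority for c in clauses if c.covered) / weight_total
    # Assurance: each covered clause is further discounted by evidence strength.
    assured = sum(c.priority * c.assurance for c in clauses if c.covered) / weight_total
    return {
        "clause": round(100 * plain, 1),
        "priority_weighted": round(100 * weighted, 1),
        "assurance": round(100 * assured, 1),
    }
```

Note the ordering this produces: assurance can never exceed priority-weighted coverage, since it multiplies each covered clause's weight by a factor in [0, 1].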

Compliance coverage

Clause-level regulatory coverage across 60 frameworks. All three percentages are computed by scripts/audit_compliance_mappings.py against data/regulations.json and stored in reports/compliance-coverage.json. Methodology is documented in docs/coverage-methodology.md.

By tier


Per regulation

Regulation | Tier | Version | Clause % | Priority % | Assurance % | Clauses

Audit findings

Snapshot from the most recent scripts/audit_compliance_mappings.py run. Findings are structural validation results (clause grammar, regulation references, tier classification). New errors fail CI; baselined warnings are tracked separately.
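The "new errors fail CI; baselined warnings are tracked separately" policy can be sketched like this. The finding shape (`severity`, `id` keys) and the baseline-as-a-set representation are assumptions for illustration, not the audit script's actual data model:

```python
def check_findings(findings: list[dict], baseline: set[str]) -> int:
    """Return a CI exit code: 1 on any error or non-baselined warning, else 0."""
    errors = [f for f in findings if f["severity"] == "error"]
    # Warnings already recorded in the baseline are tolerated; new ones fail.
    new_warnings = [f for f in findings
                    if f["severity"] == "warning" and f["id"] not in baseline]
    for finding in errors + new_warnings:
        print(f'{finding["severity"]}: {finding["id"]}')
    return 1 if errors or new_warnings else 0
```

Under this scheme the baseline only ever shrinks: fixing a baselined warning and removing its entry ratchets the catalogue forward without blocking unrelated work.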


Technical quality (per category)

Per-category quality grade across six dimensions (references, provenance authority, freshness, known false positives, MITRE ATT&CK coverage, sample fixtures). Generated by scripts/generate_scorecard.py; methodology in docs/scorecard.md.

Cat | Name | UCs | Refs % | KFP % | MITRE % | Prov. | Samples % | Composite | Grade
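A composite-and-grade computation over the six dimensions might look like the sketch below. The weights and grade bands here are invented placeholders; the real values live in docs/scorecard.md and scripts/generate_scorecard.py.

```python
# Hypothetical per-dimension weights (must sum to 1.0); not the real scheme.
WEIGHTS = {"refs": 0.25, "kfp": 0.15, "mitre": 0.20,
           "provenance": 0.15, "samples": 0.15, "freshness": 0.10}

# Hypothetical grade bands: first cutoff the composite meets wins.
GRADE_BANDS = [(90.0, "A"), (80.0, "B"), (70.0, "C"), (60.0, "D"), (0.0, "F")]

def composite(scores: dict[str, float]) -> tuple[float, str]:
    """Weighted composite of 0-100 dimension scores, plus a letter grade."""
    value = sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)
    grade = next(letter for cutoff, letter in GRADE_BANDS if value >= cutoff)
    return round(value, 1), grade
```

Because the weights sum to 1.0, the composite stays on the same 0-100 scale as the individual dimensions.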

Machine-readable artifacts

Every percentage on this page is computed from these static files. Fork the repo, diff the JSON, or wire them into your own CI to gate builds on compliance posture.
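Wiring the artifacts into your own CI could be as simple as the sketch below, which fails the build when any headline metric in reports/compliance-coverage.json drops below a floor. The threshold values and the report's key names are assumptions; adjust them to the actual schema.

```python
"""Hypothetical CI gate over reports/compliance-coverage.json."""
import json

# Assumed key names and minimum percentages; not project policy.
THRESHOLDS = {"clause_pct": 80.0, "priority_pct": 75.0, "assurance_pct": 60.0}

def gate(report: dict) -> list[str]:
    """Return one failure message per metric below its threshold."""
    return [f"{key} {report.get(key, 0.0):.1f}% < {minimum:.1f}%"
            for key, minimum in THRESHOLDS.items()
            if report.get(key, 0.0) < minimum]

def main(path: str = "reports/compliance-coverage.json") -> int:
    with open(path) as handle:
        failures = gate(json.load(handle))
    for message in failures:
        print(f"FAIL: {message}")
    return 1 if failures else 0
```

Missing keys are treated as 0.0, so a renamed or dropped metric fails loudly instead of passing silently.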