Rating Input & Scoring Schedule
This page is the public scoring schedule for the current TrackForge M58 v1.1 AAA-D catalogue rating. It is designed for buyers, lawyers, auditors, technical reviewers, and catalogue owners who need to understand exactly how a frozen rating input becomes a numeric score and a grade.
It should be read with the overview Rating Methodology (AAA-D). That page explains why the methodology exists. This page specifies the scoring contract.
Transparency Boundary
TrackForge publishes the parts of the system that a verifier needs in order to check the rating:
- the rating input shape;
- the public scoring factors;
- the row-level risk-point schedule;
- the catalogue-level cap rules;
- the AAA-D threshold table;
- the hashes and proof files that bind a rating event to its exact inputs.
TrackForge does not publish the private detector playbook: source-specific SQL, graph traversals, resolver thresholds, society-specific scraping logic, matching heuristics, or the full internal research history behind every detector. Those are proprietary detection methods. They are also upstream of the rating contract.
The distinction is deliberate. A verifier must be able to recompute the grade from the certified rating input bundle. A verifier does not need to independently rediscover every leakage condition from raw society data in order to verify that TrackForge applied the published scoring schedule consistently.
Inputs To The Rating Function
The catalogue rating function is deterministic:
rating_event =
f(
methodology_version,
methodology_hash,
workflow_version,
taxonomy_version,
jurisdictional_scope,
original_ingested_population,
leakage_vector,
agreement_binding_summary,
iswc_coverage_summary,
merkle_root,
snapshot_time
)
For catalogue-sale diligence, the denominator is the original ingested catalogue population where that population is available. Later cleaning, matching, or remediation does not remove a row from the original rating event. A later remediation creates a later rating event.
Row-Level Rating Input
Each rated population row is represented in the leakage vector as M58-CATALOGUE-TRACK-RISK. The row is not a secret classification. It is the public scoring projection of the frozen evidence state.
Current row materiality inputs include:
| Field | Meaning |
|---|---|
| population_id | Stable identifier for the row in the original ingested catalogue population. |
| population_source | Source table or intake population used as the rating denominator. |
| source_row_number | Source file row number where available. |
| isrc | Recording identifier supplied or resolved for the population row. |
| tunecode | PRS/SearchWorks work identifier where available. |
| title | Title supplied or resolved for the population row. |
| matched_snapshot_id | Sealed evidence snapshot matched to the population row, or null if none matched. |
| active_finding_count | Count of active results-page findings attached to the row. |
| critical_count | Active critical findings attached to the row. |
| high_count | Active high-severity findings attached to the row. |
| medium_count | Active medium-severity findings attached to the row. |
| low_count | Active low-severity findings attached to the row. |
| agreement_binding_state | PRS/SearchWorks WACD agreement-binding state for the row. |
| iswc_coverage_state | ISWC coverage state for the row. |
| population_flags | Population-integrity flags such as missing ISRC, duplicate ISRC, missing work identity, or no matched evidence snapshot. |
| population_risk_points | Risk points added by population-integrity flags. |
| finding_risk_points | Risk points added by active finding severity counts. |
| agreement_risk_points | Risk points added by agreement-binding state. |
| iswc_risk_points | Risk points added by ISWC coverage state. |
| total_risk_points | Sum of row risk points before the 100-point row cap. |
| capped_total_risk_points | Row risk points after the 100-point cap. |
| track_score | Row score after capped risk points are subtracted from 100. |
The row score is calculated as:
total_risk_points =
population_risk_points
+ finding_risk_points
+ agreement_risk_points
+ iswc_risk_points
capped_total_risk_points = min(total_risk_points, 100)
track_score = max(0, 100 - capped_total_risk_points)
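The row arithmetic above can be sketched directly. This is a minimal illustration of the published formulas; the function name is illustrative, while the field names follow the row-level table.

```python
def track_score(population_risk_points: int,
                finding_risk_points: int,
                agreement_risk_points: int,
                iswc_risk_points: int) -> int:
    """Row score per the published schedule: sum the four risk-point
    components, cap the total at 100, then subtract from 100."""
    total_risk_points = (population_risk_points
                         + finding_risk_points
                         + agreement_risk_points
                         + iswc_risk_points)
    capped_total_risk_points = min(total_risk_points, 100)
    return max(0, 100 - capped_total_risk_points)
```

A clean row scores 100; a row whose risk points exceed 100 floors at 0 rather than going negative.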
Results-Page Findings
The results page is the operational expression of the leakage taxonomy for a catalogue run. Its active findings are grouped for users into industry-facing categories, but the rating consumes the deterministic severity counts attached to the rated population row.
The public scoring schedule does not publish every internal detector rule. Instead, it publishes how the scoring function treats active findings once the run has projected them into the frozen rating input:
| Public scoring input | State | Risk points |
|---|---|---|
| Active finding severity | critical finding | 22 each, up to 3 per row |
| Active finding severity | high finding | 13 each, up to 3 per row |
| Active finding severity | medium finding | 6 each, up to 3 per row |
| Active finding severity | low finding | 2 each, up to 3 per row |
The "up to 3 per row" rule prevents a single row with many similar findings from dominating the entire catalogue by count alone. Severe structural defects are handled again at catalogue level through caps.
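The severity schedule and the count cap can be sketched as follows. This reads "up to 3 per row" as a per-severity count cap, which is how the table states it; the constant and function names are illustrative.

```python
# Per-severity risk points from the public scoring schedule.
SEVERITY_POINTS = {"critical": 22, "high": 13, "medium": 6, "low": 2}
PER_SEVERITY_ROW_CAP = 3  # at most 3 findings of each severity count per row


def finding_risk_points(counts: dict) -> int:
    """Sum severity points for a row's active findings, counting at most
    3 findings of each severity (extra findings add no further points)."""
    return sum(points * min(counts.get(severity, 0), PER_SEVERITY_ROW_CAP)
               for severity, points in SEVERITY_POINTS.items())
```

For example, a row with five critical findings contributes 3 × 22 = 66 points, not 110.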
Agreement-Binding Inputs
For UK composition pathways, TrackForge treats PRS/SearchWorks WACD agreement binding as a direct rating input.
| agreement_binding_state | Meaning | Risk points |
|---|---|---|
| agreement_bound | The track/work is explicitly tied to a specific client-controlled PRS/SearchWorks agreement. | 0 |
| inferred_or_assumed | There is ownership, work, tunecode, income, or society evidence, but no explicit agreement binding in the frozen evidence. | 18 |
| missing_or_conflicting | No reliable client agreement route exists, or the evidence contradicts the expected account path. | 35 |
| not_applicable | Agreement binding is outside the declared rights, society, or workflow scope. | 0 |
This is a sale-diligence input, not merely a payment input. A work may still receive income without explicit agreement binding, but that income is weaker because PRS or ICE may treat it as inferred rather than determined. That creates counterclaim and dispute risk for a buyer.
ISWC Inputs
ISWC coverage is scored separately from agreement binding because it affects work identity and society cross-reference.
| iswc_coverage_state | Meaning | Risk points |
|---|---|---|
| present | Exactly one reliable ISWC is present for the row. | 0 |
| missing | No ISWC is present where composition identity is in scope. | 10 |
| conflict | More than one conflicting ISWC is present for the row. | 20 |
| not_applicable | ISWC coverage is outside the declared workflow scope. | 0 |
Missing ISWC is not always a direct payment leak by itself. It is still a rating factor because it weakens work resolution, registration repair, and diligence certainty.
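Both the agreement-binding and ISWC inputs are simple state-to-points lookups. A sketch, using the state names exactly as listed in the two tables above (the constant names themselves are illustrative):

```python
# Agreement-binding risk points, keyed by agreement_binding_state.
AGREEMENT_RISK_POINTS = {
    "agreement_bound": 0,
    "inferred_or_assumed": 18,
    "missing_or_conflicting": 35,
    "not_applicable": 0,
}

# ISWC coverage risk points, keyed by iswc_coverage_state.
ISWC_RISK_POINTS = {
    "present": 0,
    "missing": 10,
    "conflict": 20,
    "not_applicable": 0,
}
```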
Population-Integrity Inputs
Population defects are scored because the catalogue denominator is the original ingested population. A row that cannot be matched or resolved is part of the risk profile; it is not silently removed.
| Population flag | Meaning | Risk points |
|---|---|---|
missing_isrc_count | The population row has no usable ISRC. | 28 |
unmatched_snapshot_count | The population row has no matched sealed evidence snapshot. | 30 |
duplicate_isrc_count | The same ISRC appears more than once in the rating population. | 8 |
missing_title_count | The population row has no usable title. | 6 |
missing_work_identity_count | The row has no tunecode, ISWC, or writer evidence. | 10 |
These points are additive with active findings, agreement risk, and ISWC risk.
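The additive population-flag scoring can be sketched like this. The flag names are simplified from the count fields in the table above (e.g. missing_isrc for missing_isrc_count) and are illustrative, not the certified schema.

```python
# Risk points per population-integrity flag (values from the table above).
POPULATION_FLAG_POINTS = {
    "missing_isrc": 28,
    "unmatched_snapshot": 30,
    "duplicate_isrc": 8,
    "missing_title": 6,
    "missing_work_identity": 10,
}


def population_risk_points(flags: list) -> int:
    """Sum the points for every population-integrity flag raised on a row."""
    return sum(POPULATION_FLAG_POINTS[flag] for flag in flags)
```

A row with no ISRC, no matched snapshot, and no work identity carries 28 + 30 + 10 = 68 population points before any findings are counted.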
Catalogue Aggregation
The raw catalogue score is the arithmetic average of all row scores:
raw_catalogue_score = sum(track_score) / rated_population_count
The final numeric score is the lower of that raw average and every applicable catalogue cap:
final_score = min(raw_catalogue_score, applicable_catalogue_caps)
Caps are important because some defects should prevent a high grade even if they affect only part of the catalogue. For example, a catalogue with no agreement-bound PRS works is not a high-grade sale-diligence asset merely because some rows have clean metadata.
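The averaging-then-capping rule is small enough to state in code. A minimal sketch, assuming the applicable caps have already been evaluated elsewhere; the function name is illustrative.

```python
def final_catalogue_score(track_scores: list, applicable_caps: list) -> float:
    """Average all row scores, then take the minimum of that average
    and every applicable catalogue cap."""
    raw_catalogue_score = sum(track_scores) / len(track_scores)
    return min([raw_catalogue_score, *applicable_caps])
```

With no applicable caps, the final score is simply the raw average.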
Catalogue Caps
Current M58 v1.1 cap rules:
| Rule ID | Condition | Maximum score |
|---|---|---|
critical_findings_cap_bbb | Any critical active finding exists. | 72 |
any_missing_or_conflicting_agreement_cap_a | Any row has missing or conflicting agreement binding. | 82 |
any_inferred_agreement_cap_aa | Any row has inferred or assumed agreement binding. | 90 |
zero_bound_agreement_with_agreement_risk_cap_bbb | Zero rows are agreement-bound and at least one row carries agreement risk. | 72 |
agreement_risk_quarter_population_cap_a | Agreement-risk rows are at least 25% of the population. | 82 |
catalog_population_missing_agreement_quarter_cap_ccc | Original population is used and missing/conflicting agreement rows are at least 25% of the population. | 40 |
catalog_population_zero_bound_agreement_cap_ccc | Original population is used, zero rows are agreement-bound, and at least one row carries agreement risk. | 40 |
any_iswc_conflict_cap_a | Any ISWC conflict exists. | 82 |
any_missing_iswc_cap_aa | Any missing ISWC exists. | 90 |
iswc_risk_quarter_population_cap_a | ISWC-risk rows are at least 25% of the population. | 82 |
catalog_population_missing_iswc_half_cap_ccc | Original population is used and missing ISWCs are at least 50% of the population. | 40 |
catalog_population_missing_isrc_tenth_cap_ccc | Original population is used and missing ISRCs are at least 10% of the population. | 40 |
catalog_population_unmatched_snapshot_tenth_cap_ccc | Original population is used and unmatched rows are at least 10% of the population. | 40 |
Numeric Score To Rating
| Minimum final score | Rating |
|---|---|
| 95 | AAA |
| 90 | AA |
| 82 | A |
| 72 | BBB |
| 62 | BB |
| 52 | B |
| 40 | CCC |
| 30 | CC |
| 20 | C |
| 0 | D |
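The threshold table maps to a grade by taking the first minimum the final score meets, scanning from AAA downward. A minimal sketch; the names are illustrative.

```python
# (minimum final score, grade), highest grade first.
RATING_THRESHOLDS = [
    (95, "AAA"), (90, "AA"), (82, "A"), (72, "BBB"), (62, "BB"),
    (52, "B"), (40, "CCC"), (30, "CC"), (20, "C"), (0, "D"),
]


def rating_for(final_score: float) -> str:
    """Return the AAA-D grade: the first threshold the score meets wins."""
    for minimum, grade in RATING_THRESHOLDS:
        if final_score >= minimum:
            return grade
    return "D"  # scores cannot go below 0, but default defensively
```

Note that a score of exactly 40 is CCC, while 39.9 falls to CC: thresholds are inclusive minimums.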
Worked Example
Consider a three-row catalogue:
| Row | Inputs | Risk points | Row score |
|---|---|---|---|
| 1 | agreement-bound, ISWC present, no active findings | 0 | 100 |
| 2 | one high finding, inferred agreement, ISWC missing | 13 + 18 + 10 = 41 | 59 |
| 3 | no ISRC, no matched snapshot, missing work identity, missing/conflicting agreement, ISWC missing | 28 + 30 + 10 + 35 + 10 = 113, capped to 100 | 0 |
The raw average is:
(100 + 59 + 0) / 3 = 53.0
The catalogue has one missing/conflicting agreement row out of three, so the catalog_population_missing_agreement_quarter_cap_ccc cap applies. The final score is therefore:
min(53.0, 40) = 40
The catalogue receives CCC, not B, because the structural agreement defect is too material to average away.
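The worked example can be recomputed in a few lines, which is exactly the check a verifier performs:

```python
# Row scores from the worked-example table above.
row_scores = [100, 59, 0]
raw = sum(row_scores) / len(row_scores)   # 159 / 3 = 53.0

# 1 of 3 rows (~33%) carries missing/conflicting agreement binding,
# so the quarter-population CCC cap limits the score to 40.
final = min(raw, 40)
```

The final score of 40 sits exactly on the CCC threshold in the table above.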
Verification Requirements
A verifier checking an M58 v1.1 rating should be able to inspect:
- methodology_version and methodology_hash;
- workflow_version;
- taxonomy_version;
- jurisdictional_scope;
- rating_scale;
- rating_input_hash;
- leakage_vector_hash;
- the ordered leakage vector;
- row-level materiality inputs and row scores;
- agreement-binding summary counts;
- ISWC coverage summary counts;
- the original population count used as denominator;
- the Merkle/current root and proof state;
- snapshot links for sealed evidence rows.
The verification task is:
- Confirm the proof files hash to the values stated in manifest.json.
- Confirm proof/leakage_vector.json hashes to the rating event's leakage_vector_hash.
- Recompute row risk points and row scores from this scoring schedule.
- Average the row scores across the rating denominator.
- Apply every applicable catalogue cap.
- Map the final score to AAA-D.
- Confirm the recomputed score and rating match proof/rating_event.json.
- Confirm the rating event hashes to the stated rating_input_hash.
- Confirm the methodology hash and proof root match the certified event.
Internal Taxonomy Relationship
The results-page taxonomy is the detection layer. It contains internal identifiers and detector families such as RL-*, XD-*, DL-*, and PO-*, plus source-specific evidence and remediation context.
The rating input is the public scoring projection. It records the severity counts, agreement state, ISWC state, population flags, evidence references, and row risk points needed to reproduce the grade. That projection protects two things at once:
- Verifier rights - a buyer or auditor can check the grade without trusting TrackForge's narrative.
- Detection IP - TrackForge does not publish the full detector recipe book that created the results-page findings.
If TrackForge changes how it detects an internal failure mode but emits the same public rating input, the scoring methodology has not changed. If TrackForge changes points, caps, thresholds, input states, denominator rules, or proof requirements, the methodology must be versioned and historical ratings remain bound to the version and hash they were issued under.