
Rating Input & Scoring Schedule

This page is the public scoring schedule for the current TrackForge M58 v1.1 AAA-D catalogue rating. It is designed for buyers, lawyers, auditors, technical reviewers, and catalogue owners who need to understand exactly how a frozen rating input becomes a numeric score and a grade.

It should be read together with the Rating Methodology (AAA-D) overview. That page explains why the methodology exists; this page specifies the scoring contract.

Transparency Boundary

TrackForge publishes the parts of the system that a verifier needs in order to check the rating:

  • the rating input shape;
  • the public scoring factors;
  • the row-level risk-point schedule;
  • the catalogue-level cap rules;
  • the AAA-D threshold table;
  • the hashes and proof files that bind a rating event to its exact inputs.

TrackForge does not publish the private detector playbook: source-specific SQL, graph traversals, resolver thresholds, society-specific scraping logic, matching heuristics, or the full internal research history behind every detector. Those are proprietary detection methods. They are also upstream of the rating contract.

The distinction is deliberate. A verifier must be able to recompute the grade from the certified rating input bundle. A verifier does not need to independently rediscover every leakage condition from raw society data in order to verify that TrackForge applied the published scoring schedule consistently.

Inputs To The Rating Function

The catalogue rating function is deterministic:

rating_event = f(
    methodology_version,
    methodology_hash,
    workflow_version,
    taxonomy_version,
    jurisdictional_scope,
    original_ingested_population,
    leakage_vector,
    agreement_binding_summary,
    iswc_coverage_summary,
    merkle_root,
    snapshot_time
)

For catalogue-sale diligence, the denominator is the original ingested catalogue population where that population is available. Later cleaning, matching, or remediation does not remove a row from the original rating event. A later remediation creates a later rating event.
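As a reading aid, the frozen input bundle can be modelled as a plain immutable record. A minimal Python sketch; the field names come from the signature above, but the types are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RatingInput:
    """Frozen inputs to the deterministic rating function f.

    Types are illustrative assumptions; the contract only fixes the
    field names and the fact that the bundle is immutable per event.
    """
    methodology_version: str
    methodology_hash: str
    workflow_version: str
    taxonomy_version: str
    jurisdictional_scope: str
    original_ingested_population: int   # rating denominator
    leakage_vector: list                # ordered M58-CATALOGUE-TRACK-RISK rows
    agreement_binding_summary: dict
    iswc_coverage_summary: dict
    merkle_root: str
    snapshot_time: str
```

Because the record is frozen, a later remediation cannot mutate it; it can only produce a new RatingInput and therefore a new rating event.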

Row-Level Rating Input

Each rated population row is represented in the leakage vector as M58-CATALOGUE-TRACK-RISK. The row is not a secret classification. It is the public scoring projection of the frozen evidence state.

Current row materiality inputs include:

| Field | Meaning |
| --- | --- |
| population_id | Stable identifier for the row in the original ingested catalogue population. |
| population_source | Source table or intake population used as the rating denominator. |
| source_row_number | Source file row number where available. |
| isrc | Recording identifier supplied or resolved for the population row. |
| tunecode | PRS/SearchWorks work identifier where available. |
| title | Title supplied or resolved for the population row. |
| matched_snapshot_id | Sealed evidence snapshot matched to the population row, or null if none matched. |
| active_finding_count | Count of active results-page findings attached to the row. |
| critical_count | Active critical findings attached to the row. |
| high_count | Active high-severity findings attached to the row. |
| medium_count | Active medium-severity findings attached to the row. |
| low_count | Active low-severity findings attached to the row. |
| agreement_binding_state | PRS/SearchWorks WACD agreement-binding state for the row. |
| iswc_coverage_state | ISWC coverage state for the row. |
| population_flags | Population-integrity flags such as missing ISRC, duplicate ISRC, missing work identity, or no matched evidence snapshot. |
| population_risk_points | Risk points added by population-integrity flags. |
| finding_risk_points | Risk points added by active finding severity counts. |
| agreement_risk_points | Risk points added by agreement-binding state. |
| iswc_risk_points | Risk points added by ISWC state. |
| total_risk_points | Sum of row risk points before the 100-point row cap. |
| capped_total_risk_points | Row risk points after the 100-point cap. |
| track_score | Row score after risk points are subtracted from 100. |

The row score is calculated as:

total_risk_points =
population_risk_points
+ finding_risk_points
+ agreement_risk_points
+ iswc_risk_points

capped_total_risk_points = min(total_risk_points, 100)

track_score = max(0, 100 - capped_total_risk_points)
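A minimal sketch of the row arithmetic, assuming the four point components have already been computed from the schedules below:

```python
def track_score(population_risk_points: int,
                finding_risk_points: int,
                agreement_risk_points: int,
                iswc_risk_points: int) -> int:
    """Published row formula: sum the components, cap at 100, subtract from 100."""
    total = (population_risk_points + finding_risk_points
             + agreement_risk_points + iswc_risk_points)
    capped = min(total, 100)
    return max(0, 100 - capped)
```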

Results-Page Findings

The results page is the operational expression of the leakage taxonomy for a catalogue run. Its active findings are grouped for users into industry-facing categories, but the rating consumes the deterministic severity counts attached to the rated population row.

The public scoring schedule does not publish every internal detector rule. Instead, it publishes how the scoring function treats active findings once the run has projected them into the frozen rating input:

| Public scoring input | State | Risk points |
| --- | --- | --- |
| Active finding severity | critical finding | 22 each, up to 3 per row |
| Active finding severity | high finding | 13 each, up to 3 per row |
| Active finding severity | medium finding | 6 each, up to 3 per row |
| Active finding severity | low finding | 2 each, up to 3 per row |

The "up to 3 per row" rule prevents a single row with many similar findings from dominating the entire catalogue by count alone. Severe structural defects are handled again at catalogue level through caps.

Agreement-Binding Inputs

For UK composition pathways, TrackForge treats PRS/SearchWorks WACD agreement binding as a direct rating input.

| agreement_binding_state | Meaning | Risk points |
| --- | --- | --- |
| agreement_bound | The track/work is explicitly tied to a specific client-controlled PRS/SearchWorks agreement. | 0 |
| inferred_or_assumed | There is ownership, work, tunecode, income, or society evidence, but no explicit agreement binding in the frozen evidence. | 18 |
| missing_or_conflicting | No reliable client agreement route exists, or the evidence contradicts the expected account path. | 35 |
| not_applicable | Agreement binding is outside the declared rights, society, or workflow scope. | 0 |

This is a sale-diligence input, not merely a payment input. A work may still receive income without explicit agreement binding, but that income is weaker because PRS or ICE may treat it as inferred rather than determined. That creates counterclaim and dispute risk for a buyer.

ISWC Inputs

ISWC coverage is scored separately from agreement binding because it affects work identity and society cross-reference.

| iswc_coverage_state | Meaning | Risk points |
| --- | --- | --- |
| present | Exactly one reliable ISWC is present for the row. | 0 |
| missing | No ISWC is present where composition identity is in scope. | 10 |
| conflict | More than one conflicting ISWC is present for the row. | 20 |
| not_applicable | ISWC coverage is outside the declared workflow scope. | 0 |

Missing ISWC is not always a direct payment leak by itself. It is still a rating factor because it weakens work resolution, registration repair, and diligence certainty.
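Both state tables reduce to fixed lookups. A minimal sketch covering the two schedules above:

```python
# Points by agreement-binding state, from the published table.
AGREEMENT_POINTS = {
    "agreement_bound": 0,
    "inferred_or_assumed": 18,
    "missing_or_conflicting": 35,
    "not_applicable": 0,
}

# Points by ISWC coverage state, from the published table.
ISWC_POINTS = {
    "present": 0,
    "missing": 10,
    "conflict": 20,
    "not_applicable": 0,
}

def agreement_risk_points(state: str) -> int:
    return AGREEMENT_POINTS[state]

def iswc_risk_points(state: str) -> int:
    return ISWC_POINTS[state]
```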

Population-Integrity Inputs

Population defects are scored because the catalogue denominator is the original ingested population. A row that cannot be matched or resolved is part of the risk profile; it is not silently removed.

| Population flag | Meaning | Risk points |
| --- | --- | --- |
| missing_isrc_count | The population row has no usable ISRC. | 28 |
| unmatched_snapshot_count | The population row has no matched sealed evidence snapshot. | 30 |
| duplicate_isrc_count | The same ISRC appears more than once in the rating population. | 8 |
| missing_title_count | The population row has no usable title. | 6 |
| missing_work_identity_count | The row has no tunecode, ISWC, or writer evidence. | 10 |

These points are additive with active findings, agreement risk, and ISWC risk.
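A minimal sketch, assuming each raised flag is recorded per row in population_flags:

```python
# Points per population-integrity flag, from the published table.
POPULATION_FLAG_POINTS = {
    "missing_isrc_count": 28,
    "unmatched_snapshot_count": 30,
    "duplicate_isrc_count": 8,
    "missing_title_count": 6,
    "missing_work_identity_count": 10,
}

def population_risk_points(population_flags: set) -> int:
    """Sum the published points for every flag raised on the row."""
    return sum(POPULATION_FLAG_POINTS[flag] for flag in population_flags)
```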

Catalogue Aggregation

The raw catalogue score is the arithmetic average of all row scores:

raw_catalogue_score = sum(track_score) / rated_population_count

The final numeric score is the lower of that raw average and every applicable catalogue cap:

final_score = min(raw_catalogue_score, applicable_catalogue_caps)

Caps are important because some defects should prevent a high grade even if they affect only part of the catalogue. For example, a catalogue with no agreement-bound PRS works is not a high-grade sale-diligence asset merely because some rows have clean metadata.
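A minimal sketch of the aggregation step, assuming applicable_caps is the list of maximum scores from whichever cap rules triggered (see the table below):

```python
def final_score(row_scores: list, applicable_caps: list) -> float:
    """Average row scores over the rating denominator, then take the
    lower of that average and every applicable catalogue cap."""
    raw = sum(row_scores) / len(row_scores)
    return min([raw] + list(applicable_caps))
```

If no cap triggers, the final score is simply the raw average.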

Catalogue Caps

Current M58 v1.1 cap rules:

| Rule ID | Condition | Maximum score |
| --- | --- | --- |
| critical_findings_cap_bbb | Any critical active finding exists. | 72 |
| any_missing_or_conflicting_agreement_cap_a | Any row has missing or conflicting agreement binding. | 82 |
| any_inferred_agreement_cap_aa | Any row has inferred or assumed agreement binding. | 90 |
| zero_bound_agreement_with_agreement_risk_cap_bbb | Zero rows are agreement-bound and at least one row carries agreement risk. | 72 |
| agreement_risk_quarter_population_cap_a | Agreement-risk rows are at least 25% of the population. | 82 |
| catalog_population_missing_agreement_quarter_cap_ccc | Original population is used and missing/conflicting agreement rows are at least 25% of the population. | 40 |
| catalog_population_zero_bound_agreement_cap_ccc | Original population is used, zero rows are agreement-bound, and at least one row carries agreement risk. | 40 |
| any_iswc_conflict_cap_a | Any ISWC conflict exists. | 82 |
| any_missing_iswc_cap_aa | Any missing ISWC exists. | 90 |
| iswc_risk_quarter_population_cap_a | ISWC-risk rows are at least 25% of the population. | 82 |
| catalog_population_missing_iswc_half_cap_ccc | Original population is used and missing ISWCs are at least 50% of the population. | 40 |
| catalog_population_missing_isrc_tenth_cap_ccc | Original population is used and missing ISRCs are at least 10% of the population. | 40 |
| catalog_population_unmatched_snapshot_tenth_cap_ccc | Original population is used and unmatched rows are at least 10% of the population. | 40 |
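Each rule is a predicate over catalogue-level summary statistics. A minimal sketch of three of the rules above; the statistic field names are assumptions for illustration:

```python
# (rule_id, condition over summary stats, maximum score)
CAP_RULES = [
    ("critical_findings_cap_bbb",
     lambda s: s["critical_rows"] > 0, 72),
    ("any_missing_or_conflicting_agreement_cap_a",
     lambda s: s["missing_or_conflicting_agreement_rows"] > 0, 82),
    ("catalog_population_missing_agreement_quarter_cap_ccc",
     lambda s: s["original_population_used"]
               and s["missing_or_conflicting_agreement_rows"]
                   >= 0.25 * s["population"], 40),
]

def applicable_caps(stats: dict) -> list:
    """Return the maximum score of every cap rule whose condition holds."""
    return [cap for _, condition, cap in CAP_RULES if condition(stats)]
```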

Numeric Score To Rating

| Minimum final score | Rating |
| --- | --- |
| 95 | AAA |
| 90 | AA |
| 82 | A |
| 72 | BBB |
| 62 | BB |
| 52 | B |
| 40 | CCC |
| 30 | CC |
| 20 | C |
| 0 | D |
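The mapping is the highest grade whose minimum the final score meets. A minimal sketch:

```python
# AAA-D thresholds from the table above, highest first.
THRESHOLDS = [(95, "AAA"), (90, "AA"), (82, "A"), (72, "BBB"), (62, "BB"),
              (52, "B"), (40, "CCC"), (30, "CC"), (20, "C"), (0, "D")]

def rating(final_score: float) -> str:
    """Return the highest grade whose minimum the score meets."""
    for minimum, grade in THRESHOLDS:
        if final_score >= minimum:
            return grade
    return "D"
```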

Worked Example

Consider a three-row catalogue:

| Row | Inputs | Risk points | Row score |
| --- | --- | --- | --- |
| 1 | agreement-bound, ISWC present, no active findings | 0 | 100 |
| 2 | one high finding, inferred agreement, ISWC missing | 13 + 18 + 10 = 41 | 59 |
| 3 | no ISRC, no matched snapshot, missing work identity, missing/conflicting agreement, ISWC missing | 28 + 30 + 10 + 35 + 10 = 113, capped to 100 | 0 |

The raw average is:

(100 + 59 + 0) / 3 = 53.0

The catalogue has one missing/conflicting agreement row out of three (33%, above the 25% threshold), so the catalog_population_missing_agreement_quarter_cap_ccc cap applies at 40. Other caps also trigger on this catalogue, but none is lower. The final score is therefore:

min(53.0, 40) = 40

The catalogue receives CCC, not B, because the structural agreement defect is too material to average away.
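The arithmetic can be checked directly with the numbers above:

```python
row_scores = [100, 59, 0]
raw = sum(row_scores) / len(row_scores)   # (100 + 59 + 0) / 3 = 53.0
final = min(raw, 40)                      # tightest triggered cap is 40
# 40 sits exactly at the CCC threshold, below B's minimum of 52.
assert final == 40
```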

Verification Requirements

A verifier checking an M58 v1.1 rating should be able to inspect:

  • methodology_version and methodology_hash;
  • workflow_version;
  • taxonomy_version;
  • jurisdictional_scope;
  • rating_scale;
  • rating_input_hash;
  • leakage_vector_hash;
  • the ordered leakage vector;
  • row-level materiality inputs and row scores;
  • agreement-binding summary counts;
  • ISWC coverage summary counts;
  • the original population count used as denominator;
  • the Merkle/current root and proof state;
  • snapshot links for sealed evidence rows.

The verification task is:

  1. Confirm the proof files hash to the values stated in manifest.json.
  2. Confirm proof/leakage_vector.json hashes to the rating event's leakage_vector_hash.
  3. Recompute row risk points and row scores from this scoring schedule.
  4. Average the row scores across the rating denominator.
  5. Apply every applicable catalogue cap.
  6. Map the final score to AAA-D.
  7. Confirm the recomputed score and rating match proof/rating_event.json.
  8. Confirm the rating event hashes to the stated rating_input_hash.
  9. Confirm the methodology hash and proof root match the certified event.
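Steps 1 and 2 reduce to file hashing. A minimal sketch; the file paths come from the steps above, but the manifest layout ("files", "path", "sha256") and the choice of SHA-256 are assumptions for illustration:

```python
import hashlib
import json

def sha256_file(path: str) -> str:
    """Hash a proof file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Step 1: bind each proof file to the hash stated in manifest.json.
with open("manifest.json") as f:
    manifest = json.load(f)
for entry in manifest["files"]:
    assert sha256_file(entry["path"]) == entry["sha256"], entry["path"]

# Step 2: bind the leakage vector to the rating event's stated hash.
with open("proof/rating_event.json") as f:
    event = json.load(f)
assert sha256_file("proof/leakage_vector.json") == event["leakage_vector_hash"]
```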

Internal Taxonomy Relationship

The results-page taxonomy is the detection layer. It contains internal identifiers and detector families such as RL-*, XD-*, DL-*, and PO-*, plus source-specific evidence and remediation context.

The rating input is the public scoring projection. It records the severity counts, agreement state, ISWC state, population flags, evidence references, and row risk points needed to reproduce the grade. That projection protects two things at once:

  • Verifier rights - a buyer or auditor can check the grade without trusting TrackForge's narrative.
  • Detection IP - TrackForge does not publish the full detector recipe book that created the results-page findings.

If TrackForge changes how it detects an internal failure mode but emits the same public rating input, the scoring methodology has not changed. If TrackForge changes points, caps, thresholds, input states, denominator rules, or proof requirements, the methodology must be versioned and historical ratings remain bound to the version and hash they were issued under.