The tier_verdicts object
The three tiers

| Tier | False positive rate | Behavior | Typical uses |
|---|---|---|---|
| Press-Safe | ~0% | Near-zero FPR on human recordings. Fires on `proto_score_v1 >= 0.807`. | Certification, public attestation, licensing submissions |
| Human-Safe | ~1–2% | Low FPR on human recordings. | Supply chain gating, catalog intake, auto-reject workflows |
| Recall | ~5–10% | Wider net, higher recall. | Manual review queues, audit sampling, secondary screening |
How to read tier_verdicts
Each tier value is one of:
| Value | Meaning |
|---|---|
"suspicious" | Evidence meets this tier’s threshold — call it AI at this confidence level |
"human" | Evidence falls below this tier’s threshold — passes at this confidence level |
null | Tier not evaluated (press-safe requires the press-safe model enabled; see below) |
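A minimal sketch of interpreting these values in consumer code. The payload shape below is hypothetical beyond the documented `tier_verdicts` key and the three tier names:

```python
# Hypothetical tier_verdicts payload; each tier is "suspicious", "human",
# or None (the JSON null, meaning the tier was not evaluated).
verdict = {
    "tier_verdicts": {
        "press_safe": None,          # e.g. press-safe model not enabled
        "human_safe": "human",
        "recall": "suspicious",
    }
}

def describe(tier_verdicts):
    """Map each tier's raw value to the meaning from the table above."""
    meanings = {
        "suspicious": "meets this tier's threshold -- flagged as AI",
        "human": "below this tier's threshold -- passes",
        None: "tier not evaluated",
    }
    return {tier: meanings[value] for tier, value in tier_verdicts.items()}

print(describe(verdict["tier_verdicts"]))
```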
"suspicious" at recall but "human" at human-safe — that means it’s borderline and should go to manual review.
Typical patterns
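The borderline pattern above can be expressed as a small routing policy. This is a sketch of one reasonable policy, not a documented API; the routing labels are hypothetical:

```python
def route(tier_verdicts):
    """Hypothetical routing policy over tier verdicts:
    - suspicious at recall but human at human-safe -> borderline, manual review
    - suspicious at human-safe -> confident reject
    - otherwise -> pass
    """
    recall = tier_verdicts.get("recall")
    human_safe = tier_verdicts.get("human_safe")
    if human_safe == "suspicious":
        return "reject"
    if recall == "suspicious" and human_safe == "human":
        return "manual_review"
    return "pass"
```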
Press-safe and proto_score_v1
Press-safe uses a proprietary acoustic analysis that remains stable across compression, mastering, and loudness variation. The underlying score is `proto_score_v1`. Press-safe fires when `proto_score_v1 >= 0.807`.
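The threshold rule can be sketched directly; only the `0.807` cutoff and the verdict values come from this document, the function name is illustrative:

```python
PRESS_SAFE_THRESHOLD = 0.807  # documented cutoff for proto_score_v1

def press_safe_verdict(proto_score_v1):
    """Return the press-safe tier verdict for a given proto_score_v1.

    Returns None (the JSON null) when no score is available, i.e. the
    tier was not evaluated.
    """
    if proto_score_v1 is None:
        return None
    return "suspicious" if proto_score_v1 >= PRESS_SAFE_THRESHOLD else "human"
```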
`press_safe` is `null` if:
- Press-safe scoring is not enabled for your organization
- The track is too short to score reliably
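Since `press_safe` can be `null` for either reason above, consumer code should not assume it is always populated. One plausible fallback, sketched here (the function name and policy are assumptions, not part of the API):

```python
def best_available_verdict(tier_verdicts):
    """Hypothetical fallback: prefer the press-safe verdict, but when it
    is None (tier not evaluated), fall back to the human-safe tier."""
    press = tier_verdicts.get("press_safe")
    return press if press is not None else tier_verdicts.get("human_safe")
```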
Choosing a tier for your use case
| Use case | Recommended tier | Why |
|---|---|---|
| Certification / public attestation | press_safe | Near-zero FPR protects reputation |
| Supply chain auto-reject | human_safe | Blocks AI confidently, low collateral damage to human artists |
| Escalation to manual review | recall | Cast a wider net, let humans decide borderline cases |
| Internal audit / research | All three | Full picture for analysis |
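The recommendations above can be encoded as a lookup so each workflow reads the tier it should trust. The use-case keys here are hypothetical labels for the table rows:

```python
# Hypothetical mapping from use case to the recommended tier name.
RECOMMENDED_TIER = {
    "certification": "press_safe",     # near-zero FPR
    "supply_chain": "human_safe",      # confident auto-reject
    "manual_review": "recall",         # widest net, humans decide
}

def verdict_for_use_case(tier_verdicts, use_case):
    """Return the verdict at the tier recommended for this use case."""
    return tier_verdicts.get(RECOMMENDED_TIER[use_case])
```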