Every result includes three simultaneous verdicts, each calibrated to a different false positive rate, so you can build policies that route automatically at high confidence and escalate to human review at lower confidence. The verdicts arrive in a tier_verdicts object:
"tier_verdicts": {
  "press_safe": null,
  "human_safe": "suspicious",
  "recall": "suspicious"
}
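A minimal Python sketch of consuming this object, assuming the response has already been parsed from JSON. The `result` variable and `describe` helper are illustrative, not part of the API; only the `tier_verdicts` keys and values come from the response shape above.

```python
# Hypothetical parsed API response, matching the example above.
# JSON null becomes Python None after parsing.
result = {
    "tier_verdicts": {
        "press_safe": None,
        "human_safe": "suspicious",
        "recall": "suspicious",
    }
}

def describe(verdicts):
    """Map each tier to a human-readable status; None means the tier
    was not evaluated (see the press-safe notes below)."""
    return {
        tier: "not evaluated" if value is None else value
        for tier, value in verdicts.items()
    }

print(describe(result["tier_verdicts"]))
```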

The three tiers

Press-Safe

False positive rate: ~0% (near-zero FPR on human recordings)
Fires when proto_score_v1 >= 0.807
Typical uses: certification, public attestation, licensing submissions

Human-Safe

False positive rate: ~1–2% (low FPR on human recordings)
Typical uses: supply chain gating, catalog intake, auto-reject workflows

Recall

False positive rate: ~5–10% (wider net, higher recall)
Typical uses: manual review queues, audit sampling, secondary screening

How to read tier_verdicts

Each tier value is one of:
| Value | Meaning |
| --- | --- |
| "suspicious" | Evidence meets this tier's threshold: call it AI at this confidence level |
| "human" | Evidence falls below this tier's threshold: passes at this confidence level |
| null | Tier not evaluated (press-safe requires the press-safe model enabled; see below) |
Tiers are independent. A track can be "suspicious" at recall but "human" at human-safe — that means it’s borderline and should go to manual review.
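This independence is what makes automatic routing possible. A sketch of one such policy in Python: the `route` function and its return labels are hypothetical, but the thresholds mirror the tier semantics above (human-safe hit means confident AI, recall-only hit means borderline).

```python
def route(verdicts):
    """Example routing policy over a tier_verdicts dict.

    - human_safe fired -> confident AI detection, handle automatically
    - only recall fired -> borderline, escalate to a human reviewer
    - nothing fired     -> passes at every evaluated tier
    """
    if verdicts.get("human_safe") == "suspicious":
        return "auto_reject"
    if verdicts.get("recall") == "suspicious":
        return "manual_review"
    return "pass"

# Borderline track: caught only at recall.
print(route({"press_safe": None, "human_safe": "human", "recall": "suspicious"}))
```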

Typical patterns

// Confident AI detection — flagged at all tiers
{ "press_safe": "suspicious", "human_safe": "suspicious", "recall": "suspicious" }

// Borderline — only caught at recall (escalate to manual review)
{ "press_safe": null, "human_safe": "human", "recall": "suspicious" }

// Clear human — passes all tiers
{ "press_safe": "human", "human_safe": "human", "recall": "human" }

// AI confirmed at human-safe; press-safe not evaluated (inconclusive for certification)
{ "press_safe": null, "human_safe": "suspicious", "recall": "suspicious" }

Press-safe and proto_score_v1

Press-safe uses a proprietary acoustic analysis that remains stable across compression, mastering, and loudness variation. The underlying score is proto_score_v1, and press-safe fires when proto_score_v1 >= 0.807. The press_safe field is null if:
  • Press-safe scoring is not enabled for your organization
  • The track is too short to score reliably
Contact support to enable press-safe scoring for your account.
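The threshold behavior can be sketched as follows. This is an illustration of the documented rule, not the service's implementation; `press_safe_verdict` is a hypothetical helper, and `None` stands in for the unavailable-score cases listed above.

```python
PRESS_SAFE_THRESHOLD = 0.807  # press-safe fires at proto_score_v1 >= 0.807

def press_safe_verdict(proto_score_v1):
    """Return "suspicious" or "human" from a proto_score_v1 value,
    or None when no score is available (press-safe scoring disabled
    for the org, or the track too short to score reliably)."""
    if proto_score_v1 is None:
        return None
    return "suspicious" if proto_score_v1 >= PRESS_SAFE_THRESHOLD else "human"

print(press_safe_verdict(0.91))
```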

Choosing a tier for your use case

| Use case | Recommended tier | Why |
| --- | --- | --- |
| Certification / public attestation | press_safe | Near-zero FPR protects reputation |
| Supply chain auto-reject | human_safe | Blocks AI confidently, low collateral damage to human artists |
| Escalation to manual review | recall | Cast a wider net, let humans decide borderline cases |
| Internal audit / research | All three | Full picture for analysis |
See the full use-case guides: