Journalism Person Vetter
A unified product for vetting anyone in the journalism ecosystem: one Sonnet pipeline, with framing that varies per use case. Whether you are pitching a journalist, citing someone as a source, or researching a public-figure expert, the report adheres to the same eight frameworks, the same twelve sections, and the same scoring rubric.
Overview
A Journalism Person Vetting Report is a per-person credibility synthesis. The user picks one of three use cases at submit time — pitch a journalist, cite as a source, or general research — and the same 12-section pipeline runs with the section emphasis and the headline-recommendation phrasing tailored to that use case.
This product replaces the prior separate Journalist Vetter and Source Vetter products. They overlapped about 70% on the underlying section coverage; the 30% difference was framing per use case. Forking that 30% into two SKUs created confusion (users picking the "wrong" Vetter for their use case) and duplicate maintenance. The unified product solves both.
Cost: 5 credits for a snapshot, 50 credits for a full vetting report. Same pricing as the prior Journalist Vetter / Source Vetter products; report depth and section count are unchanged.
Three Use Cases
Pitch a journalist
For PR partners, sources, communications strategists.
- Beat depth + time on beat
- Outlet trajectory + recent volume
- Pitch openness signals (public posts, response patterns)
- Pitch Readiness Assessment in Section 11 (3 concrete pitch ideas)
- Headline: pitch / different beat / source for / avoid
Cite as a source
For journalists, fact-checkers, editors.
- Domain expertise + accuracy track record
- Disclosure transparency (financial, advisory, family)
- Independence from the subject of the reader's story
- Citation Advice in Section 11 (3 concrete citation patterns + reliability grade per topic)
- Headline: cite as primary / cite with caveat / verify before citing / avoid
General research
For acquirers, board chairs, regulators, due-diligence teams.
- Balanced coverage of all dimensions
- Domain credibility + accuracy + statement consistency + disclosure
- General-Research Summary in Section 11 (what-to-know / what-to-verify bullets)
- Headline: trustworthy / mostly-trustworthy / requires-verification / avoid
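The three use cases above differ only in audience, scoring axes, and headline vocabulary over the shared pipeline, so they can be expressed as one configuration table rather than forked products. A minimal sketch; all identifiers are hypothetical, not the production schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UseCase:
    """Per-use-case framing over the shared 12-section pipeline (hypothetical shape)."""
    audience: str
    axes: tuple       # four equally weighted 0-25 sub-scores
    headlines: tuple  # ordered best-to-worst recommendation vocabulary

USE_CASES = {
    "pitch": UseCase(
        audience="PR partners, sources, communications strategists",
        axes=("beat_depth", "time_on_beat", "volume_on_beat", "pitch_openness"),
        headlines=("pitch", "different beat", "source for", "avoid"),
    ),
    "cite": UseCase(
        audience="journalists, fact-checkers, editors",
        axes=("domain_expertise", "accuracy_track_record",
              "disclosure_transparency", "independence_from_subject"),
        headlines=("cite as primary", "cite with caveat",
                   "verify before citing", "avoid"),
    ),
    "research": UseCase(
        audience="acquirers, board chairs, regulators, due-diligence teams",
        axes=("domain_credibility", "accuracy_track_record",
              "statement_consistency", "disclosure_transparency"),
        headlines=("trustworthy", "mostly-trustworthy",
                   "requires-verification", "avoid"),
    ),
}
```

Collapsing the fork into data like this is what lets the three use cases share one maintenance surface.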
The Eight Frameworks We Adopt
ICD 203 — Analytic Standards (Office of the Director of National Intelligence)
The U.S. Intelligence Community’s Directive 203 defines nine tradecraft standards. We treat these as binding for every Journalism Person Vetting Report — particularly: properly described sources, proper expression of uncertainty, distinction between assumption and judgment, alternative analysis where relevant.
KSJ Three-Model Fact-Checking Framework
The Knight Science Journalism program at MIT defines three operating models for editorial fact-checking depth. Our scoring rubric and evidence floor (≥3 cited URLs per section, no invented data) align with the KSJ Hybrid model — AI-first synthesis with explicit confidence calibration where evidence is thin.
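The evidence floor is mechanical, so it can be enforced before a report ships. A hedged sketch, assuming each section is represented as a dict carrying a `urls` list (a hypothetical shape, not the production data model):

```python
MIN_URLS_PER_SECTION = 3  # the KSJ-aligned evidence floor

def meets_evidence_floor(section: dict) -> bool:
    """True if the section cites at least three distinct URLs."""
    return len(set(section.get("urls", []))) >= MIN_URLS_PER_SECTION

def thin_sections(report: dict) -> list:
    """Section names below the floor; these get an explicit confidence caveat."""
    return [name for name, sec in report.items()
            if not meets_evidence_floor(sec)]
```

Sections that fail the check fall back to the "[insufficient public evidence]" language rather than invented data.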
UK PHIA Probability Yardstick (UK Defence Intelligence)
Every probabilistic claim is paired with a Low / Moderate / High confidence band. The seven-band yardstick (remote chance / highly unlikely / unlikely / realistic possibility / likely / highly likely / almost certain) is used for any aggregate-accuracy or undisclosed-COI inference.
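Mapping a numeric probability onto the yardstick is a simple threshold lookup. The cut-points below approximate the published PHIA ranges; the official yardstick leaves deliberate gaps between bands, which this sketch collapses upward for simplicity, so treat it as illustrative rather than normative:

```python
def phia_band(p: float) -> str:
    """Map a probability in [0, 1] to a PHIA yardstick band (gaps collapsed)."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    bands = [
        (0.05, "remote chance"),
        (0.20, "highly unlikely"),
        (0.35, "unlikely"),
        (0.50, "realistic possibility"),
        (0.75, "likely"),
        (0.90, "highly likely"),
    ]
    for upper, label in bands:
        if p <= upper:
            return label
    return "almost certain"
```

Any aggregate-accuracy or undisclosed-COI inference in a report carries the resulting band plus a Low / Moderate / High confidence qualifier.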
The New Yorker Fact-Checking Standard
The New Yorker has, for decades, run the most demanding pre-publication fact-checking process in American journalism — every assertion verified against primary sources, often by direct subject callback. We do not replicate that workflow in real time (a 50-credit report is generated in 3-5 minutes, not by a human checker over weeks). What we do replicate is the discipline: every claim cited inline, primary sources preferred over aggregator content, the full audit trail preserved in Section 12.
IFCN Signatory Standards
The International Fact-Checking Network's Code of Principles guides Section 4 (Accuracy & Corrections Track Record). We surface corrections, retractions, IFCN-signatory callouts, and citation density in established fact-check projects (PolitiFact, Snopes, FactCheck.org, IFCN signatories worldwide).
Reuters Trust Principles
The Reuters Trust Principles on independence, integrity, and freedom from bias guide Section 7 (Conflicts of Interest & Disclosure) and Section 8 (Organizational Affiliations). Where a journalist's affiliations or financial holdings could materially color their coverage, the report surfaces them with PHIA confidence bands when inference is required.
BBC Editorial Guidelines
The BBC Editorial Guidelines on accuracy, impartiality, and the corrections protocol shape our own corrections policy (see Corrections Policy below). The BBC's "re-publication, not silent edit" rule is the model for how we handle errors flagged within the 30-day correction window.
ALCOA Data Integrity (FDA)
Every Journalism Person Vetting Report carries an ALCOA Methodology footer (Attributable / Legible / Contemporaneous / Original / Accurate) — the FDA's data-integrity framework, applied to per-person analytic products.
The Twelve Sections
| # | Section | What it covers |
|---|---|---|
| 1 | Executive Summary | Built last. Headline recommendation + 3 what's-good + 3 worth-digging-into bullets. |
| 2 | Credibility Assessment | 0-100 score with 4 sub-scores tailored to the chosen use case. |
| 3 | Expertise Domain & History | Where authority is verified vs claimed. |
| 4 | Accuracy & Corrections Track Record | Documented corrections, retractions, IFCN-style accuracy signals. |
| 5 | Outlet / Affiliation Trajectory | Chronological staff / freelance / advisory affiliations with current status. |
| 6 | Prior Public Statements & Consistency | Notable claims tracked over time — consistent / evolved / contradicted. |
| 7 | Conflicts of Interest & Disclosure | Financial holdings, sponsorships, family connections, social proximity to subject. |
| 8 | Organizational Affiliations | Boards, advisory roles, paid positions, fellowships. |
| 9 | Media Appearance Pattern | Outlets that quote the subject, frequency, beat fit, peer-citation density. |
| 10 | Legal & Reputation History | Severity-ranked findings or honest "no issues found". |
| 11 | Use-Case Advice | Pitch Readiness / Citation Advice / General-Research Summary, depending on the use case picked at submit. |
| 12 | References & Source Citations | Aggregated audit trail, deduplicated, grouped by source class per ICD 206. |
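Section 12's audit trail is built by deduplicating every URL cited across Sections 1-11 and grouping by source class. A sketch of that aggregation step; the source-class taxonomy shown is hypothetical, standing in for the ICD 206-style grouping:

```python
from urllib.parse import urlparse

# Hypothetical domain -> source-class map (illustrative, not the full taxonomy)
SOURCE_CLASSES = {
    "sec.gov": "government record",
    "politifact.com": "fact-check project",
    "snopes.com": "fact-check project",
    "wikipedia.org": "reference aggregator",
}

def group_citations(urls: list) -> dict:
    """Deduplicate cited URLs and group them by source class.

    Unknown domains fall back to the generic 'press' class.
    """
    grouped: dict = {}
    for url in dict.fromkeys(urls):  # dedupe while preserving first-seen order
        host = urlparse(url).netloc.lower().removeprefix("www.")
        cls = SOURCE_CLASSES.get(host, "press")
        grouped.setdefault(cls, []).append(url)
    return grouped
```

Grouping by class (rather than by section) is what makes the audit trail scannable: a reader sees at a glance how much of the report rests on government records versus press coverage.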
Scoring Methodology
The Credibility Score is a 0-100 composite of four equally-weighted sub-scores. The four axes differ per use case but the scoring rubric is identical:
- Pitch a journalist: Beat depth (0-25) / Time on beat (0-25) / Volume on beat (0-25) / Pitch openness (0-25)
- Cite as a source: Domain expertise (0-25) / Track record of accuracy (0-25) / Disclosure transparency (0-25) / Independence from subject (0-25)
- General research: Domain credibility (0-25) / Track record of accuracy (0-25) / Public-statement consistency (0-25) / Disclosure transparency (0-25)
Each sub-score is paired with at least one cited URL from the subject's published work or third-party coverage. Sub-scores below 12/25 are interpreted as red-flag signals that the executive summary surfaces.
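The composite-and-flag rule above reduces to a few lines. A minimal sketch, with hypothetical axis names; the production rubric attaches citations to each sub-score, which this omits:

```python
RED_FLAG_THRESHOLD = 12  # sub-scores below 12/25 surface in the executive summary

def credibility_score(sub_scores: dict) -> tuple:
    """Sum four equally weighted 0-25 sub-scores into a 0-100 composite.

    Returns (composite, red_flags), where red_flags lists every axis
    scoring below 12/25.
    """
    if len(sub_scores) != 4:
        raise ValueError("expected exactly four sub-scores")
    for axis, value in sub_scores.items():
        if not 0 <= value <= 25:
            raise ValueError(f"{axis} out of range 0-25")
    composite = sum(sub_scores.values())
    red_flags = [axis for axis, v in sub_scores.items()
                 if v < RED_FLAG_THRESHOLD]
    return composite, red_flags
```

For example, a cite-as-a-source subject scoring 22 / 18 / 9 / 20 composites to 69, with the 9/25 disclosure-transparency axis surfaced as a red flag.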
Data Sources — Free Public Only
- Subject's own published work — articles, blog posts, essays, podcast appearances, conference talks, panel videos.
- Press mentions — interviews of the subject, articles authored by the subject, articles where the subject is quoted.
- Outlet correction pages — every major outlet maintains a public correction archive.
- IFCN signatories + established fact-check projects — PolitiFact, Snopes, FactCheck.org, regional IFCN signatories.
- Public X / Twitter footprint — peer endorsements, response patterns, public posts about pitches / sources.
- Wikipedia / Wikidata / Crunchbase — affiliations, board roles, awards, biographical anchors.
- SEC EDGAR — financial disclosures for journalists / sources at SEC-registered entities.
- Court records (federal PACER, state-by-state) — litigation history, defamation suits.
- Serper — general press / news search across the above sources.
What we do NOT use: paywalled databases (Westlaw, LexisNexis), subscription press indexes, internal HR records, private fact-checking services.
Honest Limits
- We do not contact the subject or their colleagues directly. The New Yorker's primary-source-callback workflow is human + multi-week; we do not replicate it.
- We do not access paywalled databases (Westlaw, LexisNexis, JSTOR) or subscription press indexes.
- We do not give legal advice. Defamation / libel exposure is not evaluated.
- We do not predict whether a journalist will accept a pitch or whether a source's claims are true. We surface evidence and confidence; editorial decisions remain with the user.
- We do not invent claims to fill thin sections. "[insufficient public evidence as of $TODAY]" is the explicit fallback.
- We do not score "bias" in absolute terms. Where a style / ideology footprint is reported, it contains verifiable observations only — word frequencies, citation patterns, sourcing patterns — never partisan labels without evidence.
Corrections Policy
Three commitments modeled on the BBC editorial corrections protocol:
- Identification window. Errors flagged within thirty days of report generation are corrected on the canonical view URL within five business days.
- Re-publication, not silent edit. Corrections preserve a redline diff between the original and corrected text, time-stamped, with a one-line explanation.
- Subject right of reply. The person named in any Vetting Report may submit a one-paragraph factual rebuttal to corrections@mentionfox.com. Verifiable rebuttals attach to the report alongside the original section.
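The "re-publication, not silent edit" commitment implies storing a machine-readable redline alongside every correction. A minimal sketch using Python's standard-library difflib; the record shape is hypothetical:

```python
import difflib
from datetime import datetime, timezone

def correction_record(original: str, corrected: str, explanation: str) -> dict:
    """Preserve a time-stamped redline diff instead of silently editing."""
    redline = list(difflib.unified_diff(
        original.splitlines(), corrected.splitlines(),
        fromfile="original", tofile="corrected", lineterm="",
    ))
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "explanation": explanation,   # the one-line explanation of the fix
        "redline": redline,           # preserved diff, original vs. corrected
        "corrected_text": corrected,
    }
```

The canonical view URL then renders the corrected text with the redline and explanation attached, matching the BBC-modeled protocol.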
Data integrity floor — ALCOA. Every Journalism Person Vetting Report carries an ALCOA Methodology footer.
References
- ICD 203 — Analytic Standards — Office of the Director of National Intelligence (2015).
- Knight Science Journalism Program at MIT — Three-Model Fact-Checking framework reference.
- UK PHIA Probability Yardstick.
- The New Yorker fact-checking process (reference essay).
- IFCN Code of Principles — International Fact-Checking Network.
- Reuters Trust Principles.
- BBC Editorial Guidelines.
- FDA Data Integrity and Compliance With Drug CGMP — ALCOA principles.
- ICD 206 — Sourcing Requirements for Disseminated Analytic Products.
Methodology v1.0 · Published 2026-05-04 · Verifierce / MentionFox · Replaces the Journalist Vetter and Source Vetter methodologies