Methodology Declaration

Journalist Vetting Report

How a 50-credit Journalist Vetting Report is produced: the published frameworks we adhere to, the limits we will not pretend to overcome, and the corrections process we follow if we get something wrong.

Overview

A Journalist Vetting Report is a paginated, twelve-section due-diligence document on one journalist. It is generated on demand from the journalist's published article archive, public-platform footprint, peer citations, and the subject's own enriched profile. It takes three to five minutes to produce, costs 50 credits (about $20 USD), and is delivered as a shareable HTML report with a printable PDF view.

It is intended for a PR firm partner deciding whether to invest pitch effort, a communications strategist building a press list, a source assessing whether to go on the record with a journalist they don't yet know, an editor evaluating a freelancer before a hire, or a fact-checker reviewing a peer's track record before republishing their work.

The report is not a verdict on the journalist. It is a structured presentation of the public record for the reader to evaluate themselves. Every claim cites a public URL or is flagged as insufficient evidence. Every probability is expressed in the seven-band UK PHIA Yardstick vocabulary plus an analytical confidence rating. Every error identified within thirty days of publication is corrected and re-published, with the redline preserved.

What this report is NOT. Not a credibility verdict (we surface the public record; you decide). Not an attempt to discredit the journalist (red-flags sections default to "no flags found" when none surface). Not a substitute for direct outreach (often the right next step is to email the journalist and ask).

The Four Frameworks We Adopt

MentionFox / Verifierce Journalist Vetting Reports align with four published standards. None was written for journalist due diligence specifically — they were written for the U.S. intelligence community, MIT science journalists, the UK's intelligence assessment profession, and The New Yorker fact-checking desk — but they translate cleanly to journalism research, and adopting them publicly is the single most credible thing a research vendor can do.

ICD 203 — Analytic Standards (Office of the Director of National Intelligence)

The U.S. Intelligence Community's Directive 203 defines nine tradecraft standards that any piece of finished intelligence must meet: properly describing the quality and credibility of underlying sources, properly expressing and explaining uncertainty, distinguishing between underlying intelligence and analysts' assumptions and judgements, incorporating analysis of alternatives, demonstrating customer relevance, using clear and logical argumentation, explaining change to or consistency of judgements, making accurate judgements and assessments, and incorporating effective visual information. We treat these as binding for every Journalist Vetting Report.

KSJ Three Models of Fact-Checking (Knight Science Journalism, MIT)

Deborah Blum's KSJ Fact-Checking Project at MIT documents three distinct working models: pre-publication line-by-line verification (the New Yorker model — see next section), post-publication rapid response (the Washington Post Fact Checker model), and structured public-knowledge correction (the BBC editorial model). A Journalist Vetting Report borrows from all three. Sonnet drafts each section under cited-evidence constraints (pre-publication discipline). The corrections section below describes our post-publication response. And the methodology page you are reading enacts the third model — public statement of how we know what we claim to know.

UK PHIA Probability Yardstick (UK Cabinet Office)

The Professional Head of Intelligence Assessment publishes a seven-band probability yardstick — Remote chance (under 5%) / Highly unlikely (10-20%) / Unlikely (25-35%) / Realistic possibility (40-50%) / Likely (55-75%) / Highly likely (80-90%) / Almost certain (over 95%). Every probabilistic claim in a MentionFox Journalist Vetting Report — accuracy track-record claims, COI inferences, response-rate predictions, ideology-footprint assessments — is expressed using these seven bands, paired with a separate analytical-confidence rating (High / Moderate / Low).
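As an illustration, the band mapping can be mechanised with a simple lookup. This is our sketch, not part of the PHIA standard: the yardstick deliberately leaves gaps between bands (e.g. 5-10%), and splitting each gap at its midpoint is an implementation choice of ours.

```python
# Sketch: map a numeric probability to the seven-band PHIA vocabulary.
# Band edges follow the ranges quoted above; the yardstick's deliberate
# gaps (e.g. 5-10%) are resolved by splitting them at the midpoint,
# which is our choice, not part of the published standard.

PHIA_BANDS = [
    (0.075, "Remote chance"),         # under 5%; gap to 10% split at 7.5%
    (0.225, "Highly unlikely"),       # 10-20%
    (0.375, "Unlikely"),              # 25-35%
    (0.525, "Realistic possibility"), # 40-50%
    (0.775, "Likely"),                # 55-75%
    (0.925, "Highly likely"),         # 80-90%
]

def phia_band(p: float) -> str:
    """Return the PHIA yardstick label for a probability p in [0, 1]."""
    if not 0.0 <= p <= 1.0:
        raise ValueError(f"probability out of range: {p}")
    for upper, label in PHIA_BANDS:
        if p < upper:
            return label
    return "Almost certain"           # over 95%, plus the 90-95% gap
```

In the report itself each call would be paired with the separate High / Moderate / Low analytical-confidence rating; that pairing is orthogonal to the band lookup.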

The New Yorker Fact-Checking Standard

The single most demanding fact-checking practice in American journalism. We adopt three of its structural disciplines for the accuracy and conflicts-of-interest sections.

The New Yorker maintains an internal multi-week fact-checking process per long-form piece that we don't replicate in real time: a 50-credit report is generated in 3-5 minutes by an automated pipeline, not by a human checker. What we do replicate is the discipline: every claim cited, every primary source preferred, every audit trail preserved. The dedicated section below spells out how.

The Twelve Sections of a Journalist Vetting Report

  1. Executive Summary: One opening sentence on the journalist, three "why pitch them" bullets, three "what to know before pitching" bullets, one headline recommendation. Built last from the eleven other sections so the verdict reflects the evidence, not the order it was researched.
  2. Beat & Expertise Assessment: Score out of 100 with four sub-scores: depth in beat, time on beat, volume on beat, domain credibility. Each sub-score carries an inline cited article URL.
  3. Outlet Trajectory & History: Chronological list of staff and freelance affiliations with dates, current status, and trajectory shape (single-outlet veteran / freelance rotator / staff-to-freelance / outlet-climber).
  4. Accuracy & Corrections Track Record: IFCN-style accuracy review. Documented corrections, retractions, peer fact-checker callouts. Default state when none surface: "No public corrections or retractions identified after extensive search of standard accuracy surfaces as of [date]."
  5. Sources & Approach Pattern: Investigative vs beat vs explainer vs opinion vs feature classification. Named-source vs anonymous-source pattern. Signature techniques (data-driven vs interview-driven, document-driven vs source-driven).
  6. Conflicts of Interest: Financial holdings, sponsorship history, paid speaking, family connections to subjects, social proximity to recurring sources, board roles, prior employment overlaps. PHIA-graded probability of undisclosed COI when inference is required.
  7. Style & Ideology Footprint: Verifiable bias signals from word choice, framing patterns, sourcing patterns, and the follow / follower graph. Descriptive observation, not judgment. Banned: "left-leaning" / "right-leaning" / "biased" without specific evidence.
  8. Public Reception Signals: X / Twitter follower and engagement counts, peer journalist endorsements, awards, conference invitations, podcast guest appearances on industry-standard shows.
  9. Engagement Patterns: Public reply behaviour, hostile-vs-receptive-to-PR posture, response-time signals from public posts, cold-pitch acceptance pattern.
  10. Pitch Readiness Assessment: Best pitch angles for this journalist, formats they prefer, what to avoid, and time-of-day patterns from their post timestamps. Three concrete pitch ideas.
  11. Red Flags & Reputation Risk: Severity-ranked findings: lawsuits (defamation, employment, copyright), industry disciplinary action, fabrication / plagiarism allegations, public-controversy involvement, deplatforming events. Honest "no flags found" if true.
  12. References & Source Citations: Aggregated audit trail of every URL cited above, deduplicated and grouped by source class (Primary / Authoritative-Secondary / Aggregator / Unverified) per ICD 206 sourcing standards.
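To make Section 2's scoring concrete, here is a minimal sketch of combining the four sub-scores into the headline Beat & Expertise score. Equal weighting is assumed purely for illustration; the report does not publish its weighting, and `beat_score` is a hypothetical name.

```python
def beat_score(depth: int, tenure: int, volume: int, credibility: int) -> int:
    """Combine the four sub-scores (each 0-100) into the headline
    Beat & Expertise score out of 100. Equal weighting is an
    assumption here; the published report does not state its weights."""
    subs = (depth, tenure, volume, credibility)
    if any(not 0 <= s <= 100 for s in subs):
        raise ValueError("each sub-score must be between 0 and 100")
    return round(sum(subs) / len(subs))
```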

The New Yorker Fact-Checking Standard — How We Apply It

The New Yorker has, for decades, run the most demanding pre-publication fact-checking process in American journalism. Magazine-staff fact-checkers verify every assertion — names, dates, quotes, characterisations — against primary sources, often calling subjects directly to confirm details. The standard is "if the magazine prints it, the magazine has checked it."

A 50-credit Journalist Vetting Report cannot replicate the multi-week human checker process. What it can — and does — replicate are three structural disciplines:

  1. Claim-by-claim verification at write time. Sonnet section prompts require ≥3 cited URLs per section. The "[insufficient public evidence as of date]" fallback is mandatory when evidence is thin — we do not let the model fabricate to fill space. A typical Vetting Report ships with 40-80 unique citations across sections.
  2. Primary-source preference. Where the journalist's own published article is the basis for a claim about their beat, we cite the article URL directly. The reader can re-verify the underlying claim themselves without running our pipeline. ICD 206 source classification (Primary / Authoritative-Secondary / Aggregator / Unverified) appears on every citation badge so the reader sees source quality at a glance.
  3. Audit trail preserved. Section 12 deduplicates and groups every URL referenced. A New Yorker checker's evidence binder is private; our equivalent is the published reference list. Total citation count + class distribution is rendered as a summary line: "47 citations: 31 Primary, 12 Authoritative-Secondary, 3 Aggregator, 1 Unverified."
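The deduplicate-and-group step behind the Section 12 summary line can be sketched as follows. The ICD 206 class labels match those above; the function name and the (url, class) tuple shape are ours, not the pipeline's actual interface.

```python
from collections import Counter

# ICD 206-style source classes used on the report's citation badges.
CLASS_ORDER = ["Primary", "Authoritative-Secondary", "Aggregator", "Unverified"]

def citation_summary(citations: list[tuple[str, str]]) -> str:
    """Deduplicate (url, source_class) citations and render the
    one-line Section 12 summary, e.g.
    '3 citations: 2 Primary, 1 Aggregator'."""
    seen: dict[str, str] = {}
    for url, source_class in citations:
        seen.setdefault(url, source_class)   # first classification wins
    counts = Counter(seen.values())
    parts = [f"{counts[c]} {c}" for c in CLASS_ORDER if counts[c]]
    return f"{len(seen)} citations: " + ", ".join(parts)
```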

Where we explicitly fall short of the New Yorker standard: we do not call the journalist directly to verify claims, we do not call subjects of their articles to verify quotes, and we do not have weeks for cross-checking. We are honest about this — see Honest Limits below.

Supporting Frameworks

Beyond the four primary frameworks, three additional standards inform Journalist Vetting Reports. We cite them so readers familiar with each can audit our adherence.

IFCN Code of Principles

The International Fact-Checking Network at Poynter publishes a Code of Principles signatory standard covering nonpartisanship, transparency of sources, transparency of funding, methodology, and corrections. Our Section 4 (Accuracy & Corrections Track Record) directly draws on the IFCN signatory criteria when assessing the journalist's own track record. We are not an IFCN signatory ourselves — we are a research tool, not a fact-checking outlet — but we apply the same evaluation criteria to our subjects.

Reuters Trust Principles

Reuters' Trust Principles, established in 1941 and amended periodically, codify integrity, independence, and freedom from bias. They underpin Reuters' editorial standards and inform our Section 7 (Style & Ideology Footprint) — specifically the requirement that ideology assessment be descriptive (specific cited observations) rather than judgment ("biased toward X" without evidence).

BBC Editorial Guidelines

The BBC publishes the most comprehensive public editorial guidelines of any major news organisation, covering accuracy, impartiality, fairness, privacy, harm and offence, conflict of interest, and corrections. Our corrections policy (below) is modeled on the BBC standard: errors corrected publicly, redline preserved, subject right of reply. The BBC's guidance on conflicts of interest informs Section 6 of every Vetting Report.

Honest Limits

What we DO do that competitors do not

  • Synthesis-tier output: a 12-section narrative due-diligence report with cited evidence and verdicts, not a database export with empty fields.
  • Public methodology: this page. Our exact frameworks are auditable by anyone, including by competitors and editors. We treat that as a feature.
  • Asymmetric pricing: 50 credits (about $20) for a full vetting report. Comparable depth from PR-research vendors typically costs 5x to 50x more.
  • Adopted intelligence-community + journalism + media-industry frameworks (ICD 203, ICD 206, UK PHIA Yardstick, KSJ, New Yorker standard, IFCN, Reuters Trust, BBC Editorial Guidelines) in writing, openly.

What we DO NOT do

  • We do not call the journalist to verify claims, or call subjects of their articles for confirmation. The New Yorker checker process is multi-week human work; our 3-5 minute pipeline is structurally different.
  • We do not access a journalist's private financial records, undisclosed sponsorships, sealed legal records, or personal communications. The Conflicts of Interest section reflects only the public record.
  • We do not query paywalled investigative archives that require institutional credentials we do not hold (PitchBook, NewsBank, LexisNexis full-text).
  • We do not characterise journalists as "biased" / "partisan" / "hack" / similar without specific cited evidence. Style & Ideology Footprint is descriptive observation, not judgment.
  • We do not invent claims to fill thin sections. Where evidence is genuinely absent, the report writes "[insufficient public evidence as of date]" and moves on.
  • We do not promise predictive accuracy on response rates, pitch acceptance, or future story choices. A Vetting Report is a synthesis of the public record at the moment of generation. The journalist's behaviour remains theirs.

Corrections Policy

Modeled on the BBC editorial corrections process. Three commitments:

  1. Identification window. Errors flagged within thirty days of report generation are corrected on the canonical view URL within five business days. Errors flagged after thirty days are evaluated and corrected at our discretion.
  2. Re-publication, not silent edit. Corrections do not overwrite the prior text silently. The report's view page preserves a redline diff between the original and corrected text, time-stamped, with a one-line explanation. Any reader who saw the original can audit the change.
  3. Subject right of reply. The journalist named in any Vetting Report may submit a one-paragraph factual rebuttal to corrections@mentionfox.com. Verifiable rebuttals attach to the report alongside the original section. Where the journalist and our research disagree on a public-record claim, both views are surfaced; we do not silently capitulate, and we do not refuse to publish the journalist's view.
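Commitment 2's redline can be produced with Python's standard difflib. The exact rendering on the view page will differ; treat this as a sketch of the mechanism, with the header format our own invention.

```python
import difflib
from datetime import datetime, timezone

def redline(original: str, corrected: str, note: str) -> str:
    """Produce a time-stamped unified diff between the original and
    corrected section text, prefixed with a one-line explanation, as
    the corrections policy describes. Formatting is illustrative."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    diff = difflib.unified_diff(
        original.splitlines(), corrected.splitlines(),
        fromfile="original", tofile="corrected", lineterm="",
    )
    return "\n".join([f"Corrected {stamp}: {note}", *diff])
```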

Data integrity floor — ALCOA. Every Journalist Vetting Report carries an ALCOA Methodology footer: each factual claim is Attributable to a cited source, presented in Legible plain language, marked with the date it was Contemporaneously verified, sourced from the Original primary record where available, and Accurately reflects the underlying evidence with explicit uncertainty flags where evidence is thin. ALCOA is the U.S. Food and Drug Administration's data-integrity principle for regulated industries; we adopt it as the floor for synthesis research because it captures the same disciplines without the regulatory overhang.

References

Primary documents referenced throughout this methodology declaration. All publicly available; we encourage readers to read them in the original.

  1. ICD 203 — Analytic Standards — Office of the Director of National Intelligence (2015).
  2. ICD 206 — Sourcing Requirements for Disseminated Analytic Products — Office of the Director of National Intelligence.
  3. KSJ Science Editing Handbook — Knight Science Journalism, MIT.
  4. UK Professional Head of Intelligence Assessment — Probability Yardstick — UK Government / Cabinet Office.
  5. The New Yorker Fact-Checking Department — public-facing description. (The internal manual is not published; we cite The New Yorker's public statements about fact-checking practice.)
  6. IFCN Code of Principles — International Fact-Checking Network at Poynter.
  7. Reuters Trust Principles — Reuters / Thomson Reuters Founders Share Company.
  8. BBC Editorial Guidelines — British Broadcasting Corporation.
  9. FDA Data Integrity and Compliance With Drug CGMP — ALCOA principles — U.S. Food and Drug Administration.

Methodology v1.0 · Published 2026-05-03 · Verifierce / MentionFox · Facet 1 of the Journalism Suite