Use disclaimer: Verification Reports are research synthesis for editorial / legal / academic review. They are not legal advice, not medical advice, and not financial advice. Final fact-checking and publication decisions remain with the editorial / legal / academic team.
Methodology Declaration

Verification Vetter

How a Verification Report is produced: the KSJ three-tier model we adopt, the per-doc-type frameworks (Bluebook for legal, USPTO for patents, MLA/APA/Chicago/AMA for academic), the citation-finder logic, and the corrections process we follow if we get something wrong.

Overview

A Verification Report is a per-claim fact-check of a document. The user pastes article text or supplies a URL; we extract every factual claim that requires citation, search for evidence per claim, render a verdict (verified / close match / contested / unverified / missing citation / broken citation link; passages that need no verification are tagged not_factual), and surface citation suggestions in the user's chosen style (APA / MLA / Chicago / Bluebook / AMA). Built for journalists, editors, book publishers, legal partners, patent attorneys, and academic editors.

This product vets the CONTENT of a document. It is the companion to the Source Vetter family, which vets the PERSON being quoted (their credibility, expertise, conflicts). A piece can fail Source Vetter (the source isn't credible) yet pass Verification Vetter (the claims are accurate), or vice versa — both passes are independent.

Pre-publication scope only. This is a pipeline tool: you run it before you ship. It does not retroactively fact-check public articles unless you supply the URL or text. We do not act as a court of record; we surface evidence, render verdicts under PHIA-graded confidence, and let the editorial / legal / academic team make the final call.

The Frameworks We Adopt

KSJ Three-Model Fact-Checking Framework

The Knight Science Journalism program at MIT defines three operating models for fact-checking depth: Newspaper (deadline-driven, single-pass, AI-augmentable), Hybrid (AI-first with human review of flagged claims), and Magazine (full primary-source-callback, human-led, every claim verified pre-publication). Our tier structure maps directly to this taxonomy. See Three Tiers for per-tier detail.

ICD 203 — Analytic Standards (Office of the Director of National Intelligence)

The U.S. Intelligence Community’s Directive 203 defines nine tradecraft standards that any analytic product must meet. We treat these as binding for every Verification Report — particularly: properly described sources, proper expression of uncertainty, distinction between assumption and judgment, alternative analysis where relevant.

UK PHIA Probability Yardstick (UK Defence Intelligence)

Every probabilistic verdict on a claim is paired with a PHIA confidence band (Low / Moderate / High). When the evidence is mixed, the verdict_explanation expresses the uncertainty using the seven-band yardstick (remote chance / highly unlikely / unlikely / realistic possibility / likely / highly likely / almost certain) rather than collapsing it into a binary verified/unverified.
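As an illustration of how a probability estimate could be paired with a yardstick phrase, here is a minimal sketch. The numeric ranges follow the published PHIA yardstick, but the cutoffs chosen for the gaps between bands, and the function and variable names, are assumptions of this sketch rather than the production implementation.

```python
# Illustrative only: maps a numeric probability estimate to a PHIA
# yardstick phrase. The published ranges leave gaps between bands; the
# cutoffs used for those gaps here are assumptions of this sketch.
PHIA_BANDS = [
    (0.05, "remote chance"),         # <= ~5%
    (0.20, "highly unlikely"),       # ~10-20%
    (0.35, "unlikely"),              # ~25-35%
    (0.50, "realistic possibility"), # ~40-50%
    (0.75, "likely"),                # ~55-75%
    (0.90, "highly likely"),         # ~80-90%
    (1.00, "almost certain"),        # >= ~95%
]

def phia_band(probability: float) -> str:
    """Return the PHIA yardstick phrase for a probability in [0, 1]."""
    for upper_bound, phrase in PHIA_BANDS:
        if probability <= upper_bound:
            return phrase
    return "almost certain"
```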

ALCOA Data Integrity (FDA)

Every verification record is Attributable (per-user audit), Legible (clear text), Contemporaneous (timestamp captured), Original (cited URL retained), and Accurate (verdict explanation cites specific evidence indices). The corrections protocol below extends this with re-publication semantics rather than silent edits.
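A minimal sketch of what an ALCOA-aligned verification record could look like, with one field per ALCOA property. The field names are assumptions for illustration, not the actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative record shape only; field names are assumptions, not the
# production schema. Each ALCOA property maps to one field.
@dataclass
class VerificationRecord:
    user_id: str                          # Attributable: per-user audit
    verdict_explanation: str              # Legible: clear text citing evidence
    generated_at: datetime = field(       # Contemporaneous: timestamp captured
        default_factory=lambda: datetime.now(timezone.utc))
    cited_url: str | None = None          # Original: cited URL retained
    evidence_indices: list[int] = field(  # Accurate: evidence cited by index
        default_factory=list)
```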

Source Class Tagging (ICD 206)

Every cited evidence URL is classified by source class (Primary / Authoritative Secondary / Tier-1 Press / Trade Press / Other) so the editorial reader can audit which claims rest on primary evidence versus journalistic synthesis.
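A hedged sketch of how a retrieved URL could be bucketed into one of the five source classes. The domain heuristics and lists below are placeholder assumptions, not the production classifier.

```python
from urllib.parse import urlparse

# Illustrative heuristics only; the domain lists are placeholder assumptions.
TIER1_PRESS = {"nytimes.com", "reuters.com", "apnews.com", "bbc.co.uk"}
TRADE_PRESS = {"techcrunch.com", "law360.com"}

def source_class(url: str) -> str:
    """Assign a source class to a cited evidence URL (sketch)."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host.endswith(".gov") or host.endswith(".edu"):
        return "Primary"
    if host.endswith(".org"):
        return "Authoritative Secondary"
    if host in TIER1_PRESS:
        return "Tier-1 Press"
    if host in TRADE_PRESS:
        return "Trade Press"
    return "Other"
```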

Citation-Style Standards

Suggested citations are formatted in the user's chosen style: APA 7th, MLA 9th, Chicago 17th (Author-Date OR Notes-Bibliography), Bluebook 21st (legal), AMA 11th (medical). Format selection happens at job submit time; the same suggested citation is rendered in every requested style.
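Because style, tier, and doc-type selection all happen at submit time, a submit payload might look like the following. Every field name and value here is a hypothetical illustration, not the actual API contract.

```python
# Hypothetical job-submit payload; field names and values are assumptions
# for illustration only, not the actual API contract.
job_request = {
    "doc_type": "academic_paper",
    "tier": "hybrid",                  # newspaper | hybrid | magazine
    "citation_styles": ["apa_7", "chicago_17_author_date"],
    "text": "Pasted article text ...", # or a "url" field pointing at the draft
}
```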

Three Tiers — Newspaper / Hybrid / Magazine

Newspaper Tier

5 credits / page (~$2 / page)
  • Sonnet 4.6 single-pass claim extraction
  • Per-claim Serper evidence retrieval
  • AI verdict + confidence + evidence sources
  • Citation-style formatting in selected styles
  • Existing-citation URL liveness check
  • Best for daily-deadline editorial review, blog posts, short articles, internal memos

Hybrid Tier

25 credits / page (~$10 / page)
  • Everything in Newspaper
  • Low-confidence and contested claims flagged for human review queue
  • Best for weekly magazine fact-checking, longer features, science journalism
  • Human-review SLA: 48-72 hours typical

Magazine Tier

100 credits / page (~$40 / page)
  • Everything in Hybrid
  • Big-picture coherence pass (tonal consistency, structural argument check)
  • Author-callback recommendations (questions for the author to resolve)
  • Every claim flagged for human pre-publication verification
  • Best for book publishers, longform investigative, legal briefs, patents

Reference benchmark. The New Yorker maintains an internal multi-week fact-checking process per long-form piece — staff fact-checkers verify every assertion against primary sources, often calling subjects directly to confirm. Magazine tier replicates the discipline and cost structure of that workflow. Newspaper tier replicates the speed of a daily-paper editorial review at substantially lower cost.

Claim Verification — How It Works

  1. Claim extraction. Sonnet 4.6 reads the document and extracts up to 80 claims that meet the "requires citation" test: factual statements (date / number / name / event / quote), scientific claims (research finding / statistical result / consensus position), historical claims, legal / regulatory claims, attributions ("X said Y"). Opinion, interpretation, definitional statements, common knowledge, and creative passages are explicitly excluded — they appear in the report tagged "not_factual" rather than verified.
  2. Existing-citation extraction. If the document already cites a footnote / endnote / parenthetical / hyperlink, we capture that anchor alongside the claim. Existing-citation URLs are then HEAD-checked for liveness.
  3. Per-claim evidence search. Each claim becomes a Serper search query. We retrieve up to six organic results per query, prioritising primary sources (government domains, peer-reviewed journals, official organisation sites) over aggregator content.
  4. Per-claim verdict. Sonnet evaluates the claim against the retrieved evidence, returning one of seven verdicts: verified / close_match / contested / unverified / missing_citation / broken_citation_link / not_factual. Each verdict is paired with a Low / Moderate / High confidence band and a 1-3-sentence explanation citing evidence indices.
  5. Suggested citation. When the document was missing a citation OR the existing citation link is broken, the verdict prompt also returns the best primary-source URL we found. The URL is then rendered as a formatted citation in every requested style.
  6. Banned: invented citations. The verifier prompt forbids invented source URLs and fabricated references. A citation suggestion either comes from the evidence retrieved or from a canonical primary source the model can verify (government domain, peer-reviewed journal, official organisation site). When no good source is found, suggested_citation_url returns null. A minimal sketch of the full flow follows this list.
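The sketch below outlines the six-step flow above, assuming hypothetical stub helpers in place of the Sonnet prompts and Serper retrieval. The names, signatures, and stub bodies are assumptions of this sketch, not the production pipeline.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(str, Enum):
    VERIFIED = "verified"
    CLOSE_MATCH = "close_match"
    CONTESTED = "contested"
    UNVERIFIED = "unverified"
    MISSING_CITATION = "missing_citation"
    BROKEN_CITATION_LINK = "broken_citation_link"
    NOT_FACTUAL = "not_factual"

@dataclass
class ClaimResult:
    claim_text: str
    verdict: Verdict
    confidence: str                     # "Low" | "Moderate" | "High"
    verdict_explanation: str
    suggested_citation_url: str | None  # null when no good source is found

# The helpers below are placeholders: in production they would wrap the
# Sonnet extraction/verdict prompts and the Serper search API.
def extract_claims(document_text: str) -> list[str]:
    return []  # placeholder for the Sonnet claim-extraction pass

def search_evidence(claim: str) -> list[dict]:
    return []  # placeholder for per-claim Serper retrieval (up to 6 results)

def judge_claim(claim: str, evidence: list[dict]) -> ClaimResult:
    # placeholder for the Sonnet verdict prompt
    return ClaimResult(claim, Verdict.UNVERIFIED, "Low",
                       "No evidence retrieved.", None)

def verify_document(document_text: str) -> list[ClaimResult]:
    results = []
    for claim in extract_claims(document_text)[:80]:  # up to 80 claims
        evidence = search_evidence(claim)
        results.append(judge_claim(claim, evidence))
    return results
```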

Citation Finder — The Specifics

Citation Finder is the layer that converts verdicts into actionable references. Three things happen for every claim that requires a citation:

  1. Source identification. The verifier identifies the best primary source for the claim from retrieved evidence (preferring government domains, peer-reviewed journals, official organisation publications, primary news outlets) or from canonical sources in training data when retrieved evidence is thin.
  2. URL liveness verification. Both existing citation URLs (from the document) and newly suggested citation URLs are HEAD-checked. URLs returning non-2xx responses are flagged as broken_citation_link (a liveness-check sketch follows this list).
  3. Multi-style formatting. The same suggested citation is rendered in every style the user selected at submit time. Standards adopted:
    • APA 7th — American Psychological Association, default for psychology / social sciences.
    • MLA 9th — Modern Language Association, default for humanities.
    • Chicago 17th — University of Chicago, both Author-Date and Notes-Bibliography forms supported. Default for history.
    • Bluebook 21st — Harvard Law Review Association, mandatory for U.S. legal briefs.
    • AMA 11th — American Medical Association, mandatory for medical / clinical journals.

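A minimal sketch of the liveness check described in step 2 above, using a plain HEAD request. The timeout value and the decision to treat network failures as broken links are assumptions of this sketch.

```python
import requests

def is_live(url: str, timeout: float = 10.0) -> bool:
    """Return True when a HEAD request receives a 2xx response."""
    try:
        response = requests.head(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        return False  # network failure treated as broken (assumption)
    return 200 <= response.status_code < 300

# Any URL that fails this check is flagged broken_citation_link.
```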
Output shape. Each claim returns suggested_citation_styles as a JSON object keyed by style name. The viewer renders the user's first-selected style by default and offers a one-click expand to see all selected styles for that claim.
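A hypothetical example of the per-claim object, shown as a Python dict keyed by style name as described above. The values are generic style templates rather than real references, purely to show the shape.

```python
# Shape illustration only; values are generic style templates, not real
# references, and the style keys are assumptions of this sketch.
suggested_citation_styles = {
    "apa_7": "Author, A. A. (Year). Title of work. Journal or Publisher.",
    "mla_9": 'Author, First. "Title of Work." Container, Year.',
}
```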

Per-Doc-Type Frameworks

Doc type · Framework adopted · What changes in the prompt
  • Book · General journalism + author-pedigree context · Standard claim extraction; chapter-level coherence in Magazine tier.
  • Magazine longread · KSJ Magazine framing · Standard extraction; author-callback recommendations in Magazine tier.
  • Legal brief · Bluebook 21st citation; case-law primacy · Every cited authority must trace to a specific case / statute / regulation. Bluebook citations in Citation Finder output.
  • Patent · USPTO examiner methodology · Every prior-art reference must trace to a specific publication / patent number. Examiner-style novelty + non-obviousness check available in Magazine tier.
  • Academic paper · MLA / APA / Chicago / AMA · Every empirical claim must trace to a primary peer-reviewed source. DOI preference for citation suggestions.
  • Textbook · Academic-standard framing · Every factual claim must trace to a primary source; pedagogical clarity not separately scored.
  • Contract · Representations + warranties focus · Flags factual representations / warranties for verification (party truth-claims separated from pure legal terms).
  • Screenplay · Historical / scientific accuracy on claimed-fact passages · Flags claims presented as fact; respects creative liberty for clearly fictional passages.
  • Other · General journalism standards · Default extraction.
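A hedged sketch of how the per-doc-type framing in the table above could translate into extraction-prompt additions. The mapping keys and wording paraphrase the table and are assumptions, not the production prompt text.

```python
# Illustrative prompt additions keyed by doc type; wording paraphrases the
# table above and is an assumption, not the production prompt text.
DOC_TYPE_PROMPT_ADDITIONS = {
    "legal_brief": "Trace every cited authority to a specific case, statute, "
                   "or regulation; format suggestions in Bluebook 21st.",
    "patent": "Trace every prior-art reference to a specific publication or "
              "patent number.",
    "academic_paper": "Trace every empirical claim to a primary peer-reviewed "
                      "source; prefer DOIs in citation suggestions.",
    "screenplay": "Flag only passages presented as fact; respect creative "
                  "liberty in clearly fictional passages.",
}
```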

Data Sources — What We Use

Evidence is retrieved by per-claim Serper web search, prioritising primary sources (government domains, peer-reviewed journals, official organisation sites) over aggregator content.

What we do NOT use: paywalled databases (Westlaw, LexisNexis, JSTOR, ProQuest), subscription academic indexes (Google Scholar full text), or proprietary fact-checking services. All evidence sources are publicly accessible at the time of report generation.

Honest Limits

What we DO do
  • Extract the factual claims in your document, retrieve publicly accessible evidence per claim, render a verdict with a PHIA confidence band, and suggest citations in your selected styles.
  • Flag missing citations and broken citation links, and surface the best primary source we found.

What we DO NOT do
  • Provide legal, medical, or financial advice; the final fact-checking and publication decision stays with your editorial / legal / academic team.
  • Retroactively fact-check published articles unless you supply the URL or text, invent citations, or draw on paywalled databases.

Corrections Policy

Three commitments modeled on the BBC editorial corrections process:

  1. Identification window. Errors flagged within thirty days of report generation are corrected on the canonical view URL within five business days.
  2. Re-publication, not silent edit. Corrections preserve a redline diff between the original and corrected verdict, time-stamped, with a one-line explanation.
  3. Subject right of reply. The author whose document was verified may submit a one-paragraph factual rebuttal to corrections@mentionfox.com. Verifiable rebuttals attach to the report alongside the original verdict.

Data integrity floor — ALCOA. Every Verification Report carries an ALCOA Methodology footer (Attributable / Legible / Contemporaneous / Original / Accurate) — the FDA's data-integrity framework, applied to per-claim analytic products.

References

  1. Knight Science Journalism Program at MIT — Three-Model Fact-Checking framework reference.
  2. ICD 203 — Analytic Standards — Office of the Director of National Intelligence (2015).
  3. ICD 206 — Sourcing Requirements for Disseminated Analytic Products.
  4. UK PHIA Probability Yardstick.
  5. FDA Data Integrity and Compliance With Drug CGMP — ALCOA principles.
  6. APA Style 7th — American Psychological Association.
  7. MLA Handbook 9th — Modern Language Association.
  8. Chicago Manual of Style 17th.
  9. Bluebook 21st — Uniform System of Citation.
  10. AMA Manual of Style 11th.
  11. USPTO Manual of Patent Examining Procedure.

Methodology v1.0 · Published 2026-05-04 · Verifierce / MentionFox · Verification Vertical foundation