Founder Vetting Report · Methodology
Methodology Declaration


How a 200-credit Founder Vetting Report is produced: the published standards we adhere to, the limits we will not pretend to overcome, and the corrections process if we get something wrong.

Overview

A Founder Vetting Report is a paginated, twelve-section due-diligence document covering one founder. It is generated on demand from public OSINT plus the subject's own enriched profile, takes three to five minutes to produce, costs 200 credits (about $80 USD at the standard credit pack rate), and is delivered as a shareable HTML report with a printable PDF view.

It is intended for an investment partner, a corporate development lead, an acquirer, an executive recruiter, or a journalist who needs an institutional-grade synthesis on a single founder before a meeting, an investment, or a story.

The report is not a credit report, not a background check, and not an FCRA-regulated consumer report. It does not access sealed records, paywalled investigative archives, or non-public databases. Everything that appears in the report is recoverable by any sufficiently patient researcher with public-internet access; what we sell is the synthesis, the cited audit trail, the framework adherence declared on this page, and the speed of delivery.

Three sentences worth committing to memory. Every claim cites a public URL or is flagged as insufficient evidence. Every probability is expressed in the seven-band UK PHIA Yardstick vocabulary plus an analytical confidence rating. Every error identified within thirty days of publication is corrected, re-published, and the redline is preserved.

The Three Frameworks We Adopt

MentionFox / Verifierce reports align with three published standards. They were not written for venture due diligence — they were written for the U.S. intelligence community, professional fact-checkers, and the United Kingdom's Defence Intelligence agency — but they translate cleanly to investment-grade research, and adopting them publicly is the single most credible thing a research vendor can do.

ICD 203 — Analytic Standards (Office of the Director of National Intelligence)

The U.S. Intelligence Community's Directive 203 defines nine tradecraft standards that any piece of finished intelligence is required to meet: properly described sources, proper expression of uncertainty, distinction between intelligence and assumptions, incorporation of alternative analysis, judgement of consequences, customer-relevant focus, logical argumentation, accurate reflection of source content, and clear language. We treat these as binding for every Vetting Report we produce. The full list is enumerated in section four below.

KSJ Three Models of Fact-Checking (Knight Science Journalism, MIT)

Deborah Blum's KSJ Fact-Checking Project at MIT documents three distinct working models: pre-publication line-by-line verification (the New Yorker model), post-publication rapid response (the Washington Post Fact Checker model), and structured public-knowledge correction (the BBC editorial model). A Vetting Report borrows from all three. Sonnet drafts each section under cited-evidence constraints (pre-publication discipline). The Corrections section below describes our post-publication response. And the methodology page you are reading enacts the third model — public statement of how we know what we claim to know.

UK PHIA Probability Yardstick (UK Defence Intelligence)

The Professional Head of Intelligence Assessment publishes a seven-band probability yardstick used across UK government intelligence reporting since 2018. It maps natural-language probability words ("highly unlikely", "realistic possibility", "almost certain") to numeric bands (under 5%, 10-20%, 25-35%, 40-50%, 55-75%, 80-90%, over 95%). The yardstick eliminates the well-documented problem that an analyst writing "may" and a reader interpreting "may" can mean wildly different things. Every probabilistic claim in a MentionFox Vetting Report is expressed using the seven yardstick terms, paired with a separate analytical-confidence rating (High / Moderate / Low) per ICD 203.

D&O Underwriter Risk Framework

In addition to the three published standards above, we adopt the public-record framework the Directors & Officers liability insurance industry uses to underwrite officer-level risk: executive compensation versus industry norms, related-party transactions, insider stock-sale patterns, prior litigation exposure, and material-weakness disclosures. We apply that framework verbatim in Section 6.5 of every Founder Vetting Report, where each founder receives a HIGH / MODERATE / LOW D&O Risk Flag from the public record before any private indemnification structure enters diligence. For private bootstrapped founders, most public-disclosure inputs return [insufficient public evidence]; that null result is itself a structural finding (no SEC obligation, no public exposure, lower D&O premium-equivalent risk on the public-record axis), not a gap.

The Twelve Sections of a Founder Vetting Report

  1. Executive Summary. One opening sentence on the founder, three "what's good" bullets, three "worth digging into" bullets, one headline recommendation. Built last from the eleven other sections so the verdict reflects the evidence, not the order it was researched.
  2. Founder-Market Fit. Score out of 100 with four sub-scores: domain expertise, public thought leadership, pre-company side projects, network density. Each sub-score carries inline cited URLs.
  3. Exit History. One row per prior venture: founded date, current status, outcome class (acquired, IPO, shut down, still operating, none), survival horizon, retrospective writing if any.
  4. Career Arc & Trajectory. Trajectory pattern named explicitly (linear domain expert, lateral pivoter, outsider), with phase-by-phase narrative against documented employment dates.
  5. Co-founder & Team. Co-founder identification, prior collaboration history, complementarity assessment, team-spawn signals if applicable.
  6. Cap Table Reconstruction. Public-record cap table assembly from press-released round details, regulatory filings where available, and dilution math. Explicit caveat where rounds are undisclosed.
  6.5. Executive Compensation & Related-Party Transactions. D&O underwriter framework applied to the public record: compensation versus industry norms, related-party transactions, insider stock-sale patterns, and a HIGH / MODERATE / LOW D&O Risk Flag. For private founders most fields return [insufficient public evidence] by design.
  7. Burn & Runway Inference. Burn rate inferred from headcount disclosures, hiring-page activity, last round size, and elapsed time. Honest range, not point estimate.
  8. Comparable Founders. Five nearest neighbors from the comparable-founders reference table (currently 70+ rows spanning VC-scale and indie-bootstrapped trajectories), each with archetype-overlap explanation and material-difference flags.
  9. Public Reputation & Red Flags. Red flags only where public evidence supports them. The default state is "no public flags found in extensive search as of [date]"; we do not invent flags to look thorough.
  10. Network & Warm Intro Paths. Mutual-connection mapping based on co-investors, shared employer history, podcast-host overlap, conference-circuit overlap, and recurring co-author signals.
  11. Pitch Readiness. Operational diligence questions a partner would ask in a first meeting, mapped against what public data already answers and what the founder will need to bring.
  12. References & Source Citations. Aggregated audit trail of every URL cited above, deduplicated, grouped by source class (Primary, Authoritative-Secondary, Aggregator, Unverified) per ICD 206 sourcing standards.
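The Section 7 burn-and-runway inference lends itself to a short worked sketch. The Python below is illustrative only: the per-head cost bounds, the flat-burn assumption, and the function name are our assumptions, not the report pipeline itself.

```python
def burn_runway_range(headcount, last_round_usd, months_since_round,
                      cost_per_head_low=12_000, cost_per_head_high=20_000):
    """Infer a monthly-burn range and remaining-runway range from public signals.

    The per-head bounds are assumed fully loaded monthly costs (salary,
    benefits, tools, office) and are illustrative. Cash remaining assumes
    the whole round was banked at close and burn was flat since -- a
    deliberately coarse public-record bound, not a point estimate.
    """
    burn_low = headcount * cost_per_head_low
    burn_high = headcount * cost_per_head_high
    remaining_low = last_round_usd - burn_high * months_since_round
    remaining_high = last_round_usd - burn_low * months_since_round
    runway_low = max(remaining_low, 0) / burn_high    # months, pessimistic
    runway_high = max(remaining_high, 0) / burn_low   # months, optimistic
    return burn_low, burn_high, runway_low, runway_high

# Hypothetical example: 25 heads, $5M round closed 12 months ago.
# Yields burn of $300k-$500k/month and roughly 0 to 4.7 months of runway.
lo, hi, r_lo, r_hi = burn_runway_range(25, 5_000_000, 12)
```

The wide output range is the point: the report publishes the honest interval rather than a false-precision midpoint.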

The Nine ICD 203 Tradecraft Standards

Each Sonnet section prompt is constructed against these nine standards. Compliance is enforced at draft time (banned-words list, cited-URL minimums, mandatory uncertainty-flag phrasing) and reviewed at the executive-summary synthesis pass.

  1. Properly describes quality and credibility of underlying sources. Every URL in section twelve is tagged with a Source Class label (Primary, Authoritative-Secondary, Aggregator, Unverified). Inline citations carry the same badges. The reader can audit reliance distribution at a glance.
  2. Properly expresses and explains uncertainties. Probabilistic claims use the seven UK PHIA bands. Each is paired with a High / Moderate / Low confidence rating that explains the basis for the confidence judgement.
  3. Properly distinguishes between underlying intelligence and the analyst's assumptions and judgements. Sections separate "the public record shows" claims (cited inline) from "we infer" claims (introduced with "we assess", "the pattern suggests", or similar). Cap table reconstructions and burn inferences are explicitly labeled as inference, not disclosure.
  4. Incorporates analysis of alternatives. The Founder-Market Fit and Trajectory sections name the alternative trajectory pattern they considered and rejected. The Red Flags section asks whether absence of evidence is evidence of absence and answers honestly.
  5. Demonstrates customer relevance and addresses implications. Each section opens with the partner-relevant question it answers. The Executive Summary closes with a Headline Recommendation directly callable in a partner meeting.
  6. Uses clear and logical argumentation. Sections progress from cited evidence through analysis to verdict. No hidden inferential leaps. Banned filler words: "brilliant", "visionary", "controversial", "may be", "could be", "perhaps", "possibly".
  7. Explains change to or consistency of analytic judgements. Where a re-generation of the same subject changes a numeric score, the diff is preserved on the report's view page (re-generation history). The methodology change-log on this page records framework updates.
  8. Makes accurate judgements and assessments. The accuracy floor is enforced by the banned-invention rule: where evidence is thin, the report writes "[insufficient public evidence as of date]" instead of fabricating. Any reader can audit the cited-URL trail in section twelve.
  9. Incorporates effective visual information where appropriate. Sub-score tables, tier-grid breakdowns, comparable-founder cards, cap-table tables, and source-class reference tables. The print stylesheet preserves tables for the PDF view.
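Draft-time enforcement of standards 2 and 6 (banned hedge words, mandatory yardstick vocabulary) reduces to a lint pass over each drafted section. The word lists below are copied from this page; the function names and the check itself are illustrative assumptions, not our production code.

```python
import re

# Hedges and filler banned at draft time (standards 2 and 6 above);
# a probabilistic draft must use a PHIA yardstick term instead.
BANNED_HEDGES = ["may be", "could be", "perhaps", "possibly",
                 "brilliant", "visionary", "controversial"]
PHIA_TERMS = ["remote chance", "highly unlikely", "unlikely",
              "realistic possibility", "likely", "highly likely",
              "almost certain"]

def lint_draft(text):
    """Return the banned hedge words found in a draft, case-insensitive."""
    lower = text.lower()
    return [w for w in BANNED_HEDGES
            if re.search(r"\b" + re.escape(w) + r"\b", lower)]

def has_yardstick_term(sentence):
    """Naive substring check ('likely' also matches 'unlikely')."""
    return any(term in sentence.lower() for term in PHIA_TERMS)

print(lint_draft("The founder may be strong."))  # → ['may be']
```

A draft that trips `lint_draft`, or makes a probabilistic claim that fails `has_yardstick_term`, is regenerated before synthesis.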

UK PHIA Probability Yardstick

Verbatim from the UK Professional Head of Intelligence Assessment guidance. Used in MentionFox Vetting Reports anywhere the underlying claim is probabilistic rather than asserted.

  • Remote chance (under 5%): the claim is conceivable but no public evidence supports it.
  • Highly unlikely (10-20%): public evidence weakly contradicts the claim or strong base rates contradict it.
  • Unlikely (25-35%): public evidence is mixed and tilts away from the claim.
  • Realistic possibility (40-50%): public evidence is mixed, neither tilt is decisive, and the claim is operationally relevant.
  • Likely / probably (55-75%): public evidence supports the claim with one or more corroborating sources.
  • Highly likely (80-90%): multiple authoritative sources independently support the claim and no countervailing public evidence surfaces.
  • Almost certain (over 95%): primary-source evidence is on the record and uncontested.

Every PHIA-band claim is paired with a separate Analytical Confidence rating (High / Moderate / Low) that records the strength of the underlying evidence base, independently of the probability itself.

Example phrasing as it appears in a real report: "Highly likely (~85%) the founder retains over 50% equity post-Series-B (Confidence: Moderate — based on standard dilution math plus partial round disclosures)." The phrasing is mandatory; vague hedges ("may", "could", "perhaps", "possibly") are banned in the Sonnet prompts.
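The band-to-term mapping and the mandatory phrasing can be sketched in a few lines of Python. The band bounds are verbatim from the table above; the gap-handling rule (snap to the nearest band midpoint) and the function names are our illustrative assumptions, since PHIA guidance leaves gap handling to the analyst.

```python
# Seven PHIA bands as (term, low, high) percentage bounds, verbatim from
# the yardstick table above; the gaps between bands (e.g. 5-10%) are
# deliberate in the original guidance.
PHIA_BANDS = [
    ("Remote chance", 0, 5),
    ("Highly unlikely", 10, 20),
    ("Unlikely", 25, 35),
    ("Realistic possibility", 40, 50),
    ("Likely / probably", 55, 75),
    ("Highly likely", 80, 90),
    ("Almost certain", 95, 100),
]

def yardstick_term(pct):
    """Map a numeric probability (0-100) to the nearest PHIA term,
    snapping gap values to the closest band midpoint (our assumption)."""
    return min(PHIA_BANDS, key=lambda b: abs(pct - (b[1] + b[2]) / 2))[0]

def render_claim(claim, pct, confidence, basis):
    """Emit the mandatory claim phrasing described above."""
    return (f"{yardstick_term(pct)} (~{pct}%) {claim} "
            f"(Confidence: {confidence} -- {basis})")

print(yardstick_term(85))  # → Highly likely
```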

Source Class Taxonomy

Per ICD 206, every citation declares its source class. Four classes, ordered by reliability:

  • Primary: the subject's own writing on a domain they own; official government records; regulatory filings; court records; primary academic sources. Examples: subject's personal site, X / Twitter at exact-match handle, sec.gov, federalregister.gov, court records, journal DOI links, wikidata.org subject pages.
  • Authoritative-Secondary: reputable institutional secondary sources with editorial standards, named authors, and a public corrections process. Examples: Wikipedia, Reuters, Associated Press, New York Times, Wall Street Journal, Financial Times, BBC, NPR, ProPublica, Bloomberg, The Atlantic, Crunchbase company pages, LinkedIn verified profiles (rate-limited access).
  • Aggregator: useful structured data with weaker editorial review, often crowd-sourced, often sales-funnelled. Examples: G2, Capterra, SimilarWeb, BuiltWith, Glassdoor, Indeed, ProductHunt rankings, GitHub trending pages.
  • Unverified: forum posts, anonymous comments, third-party blogs without named authorship, content on platforms with no editorial review. Examples: Reddit, Stack Overflow comments, Medium posts not authored by the subject, Substack posts not authored by the subject, generic blog hosts.

Section twelve of every Vetting Report opens with a class-distribution summary of the form "47 citations: 18 Primary, 22 Authoritative-Secondary, 5 Aggregator, 2 Unverified." A report with too few Primary citations or too many Unverified ones is a self-disclosing weakness; the reader can decide what weight to put on it.
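That class-distribution summary reduces to a tally over tagged citations. The sketch below is illustrative: the function name and the input shape (URL, class) are our assumptions.

```python
from collections import Counter

# Fixed display order, most to least reliable, per the taxonomy above.
CLASS_ORDER = ["Primary", "Authoritative-Secondary", "Aggregator", "Unverified"]

def class_distribution(citations):
    """Summarise (url, source_class) pairs in the Section 12 header format.

    Classes are assumed already assigned per the taxonomy above.
    """
    counts = Counter(cls for _, cls in citations)
    parts = ", ".join(f"{counts.get(c, 0)} {c}" for c in CLASS_ORDER)
    return f"{len(citations)} citations: {parts}"

print(class_distribution([("https://sec.gov/...", "Primary"),
                          ("https://reuters.com/...", "Authoritative-Secondary")]))
# → 2 citations: 1 Primary, 1 Authoritative-Secondary, 0 Aggregator, 0 Unverified
```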

Honest Limits

What we DO do that competitors do not

  • Synthesis-tier output: 12-section narrative report with cited evidence and verdicts, not a database export with empty fields.
  • Public methodology: this page. Our exact frameworks are auditable by anyone, including by competitors. We treat that as a feature.
  • Asymmetric pricing: 200 credits (about $80) for a full Vetting Report. Comparable depth from institutional research vendors typically costs 5x to 50x more.
  • Adopted intelligence-community frameworks (ICD 203, ICD 206, UK PHIA Yardstick) and pharmaceutical-grade data integrity (ALCOA), in writing, openly.

What we DO NOT do

  • We do not access sealed court records, juvenile records, sealed adoption records, or non-public regulatory filings.
  • We do not call individuals, send pretextual outreach, or solicit private disclosures from the subject's network.
  • We do not query paywalled investigative archives that require institutional credentials we do not hold.
  • We do not conduct FCRA-regulated consumer credit reporting. A Vetting Report is not a credit report and may not be used for employment, housing, or credit decisions covered by the Fair Credit Reporting Act.
  • We do not invent claims to fill thin sections. Where evidence is genuinely absent, the report writes "[insufficient public evidence as of date]" and moves on.
  • We do not promise predictive accuracy. A Vetting Report is a synthesis of the public record at the moment of generation. Investment decisions remain the partner's responsibility.

Corrections Policy

Modeled on the BBC editorial corrections process. Three commitments:

  1. Identification window. Errors flagged within thirty days of report generation are corrected on the canonical view URL within five business days. Errors flagged after thirty days are evaluated and corrected at our discretion.
  2. Re-publication, not silent edit. Corrections do not overwrite the prior text silently. The report's view page preserves a redline diff between the original draft and the corrected draft, time-stamped, with a one-line explanation of the correction. Any reader who saw the original can audit the change.
  3. Subject right of reply. The subject of any Vetting Report may submit a one-paragraph factual rebuttal to corrections@mentionfox.com. Verifiable rebuttals attach to the report alongside the original section. Where the subject and our research disagree on a public-record claim, both views are surfaced; we do not silently capitulate, and we do not refuse to publish the subject's view.
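Commitment 2's preserved redline can be sketched with Python's standard difflib. This is one way to produce a line-level diff between drafts; the production redline format may differ, and the function name is our assumption.

```python
import difflib

def redline(original, corrected):
    """Produce a line-level diff between the original and corrected drafts.

    Uses the standard unified-diff format; the report's view page would
    store this alongside a timestamp and a one-line explanation.
    """
    return "\n".join(difflib.unified_diff(
        original.splitlines(), corrected.splitlines(),
        fromfile="original", tofile="corrected", lineterm=""))

print(redline("Founder exited in 2019.", "Founder exited in 2020."))
```

Because the diff is stored rather than recomputed, a reader who saw the original draft can audit exactly what changed and when.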

Data integrity floor — ALCOA. Every Vetting Report carries an ALCOA Methodology footer at the end: each factual claim is Attributable to a cited source, presented in Legible plain language, marked with the date it was Contemporaneously verified, sourced from the Original primary record where available, and Accurately reflects the underlying evidence with explicit uncertainty flags where evidence is thin. ALCOA is the U.S. Food and Drug Administration's data-integrity principle for regulated industries; we adopt it as the floor for synthesis research because it captures the same disciplines without the regulatory overhang.

References

Primary documents referenced throughout this methodology declaration. All publicly available; we encourage readers to read them in the original.

  1. ICD 203 — Analytic Standards — Office of the Director of National Intelligence (2015).
  2. ICD 206 — Sourcing Requirements for Disseminated Analytic Products — Office of the Director of National Intelligence.
  3. KSJ Science Editing Handbook — Knight Science Journalism, MIT.
  4. UK Professional Head of Intelligence Assessment — Probability Yardstick — UK Government / Cabinet Office.
  5. FDA Data Integrity and Compliance With Drug CGMP — ALCOA principles — U.S. Food and Drug Administration.
  6. BBC Editorial Guidelines — Accuracy and Corrections — British Broadcasting Corporation.
  7. Fair Credit Reporting Act (FCRA) overview — Federal Reserve History project. (Cited as the regulated-reporting regime a Vetting Report is explicitly NOT.)

Methodology v1.0 · Published 2026-05-03 · Verifierce / MentionFox