The applicant who built a startup. The dean candidate who chaired three boards. Verify before you offer.
A high-school senior whose Common App essay is built around a startup she founded at sixteen. A graduate-program applicant who lists three peer-reviewed papers and a stint at a national lab. A dean-search finalist whose CV claims six board chairs and a turnaround in the prior provostship. Every claim has a public-record shape. We assemble that shape into a report your reading committee can finish in five minutes.
Why this is harder than it should be
Modern applications lean hard on entrepreneurship, research, social impact, and shipped product. A high-school senior who launched an app, a graduate-program applicant with co-author credit on a paper, a dean-search finalist who chaired three nonprofit boards — these claims are now the spine of the personal narrative. The reading committee has minutes per file. Verification is not built into the workflow. Two patterns recur: the applicant who substantially overstates a small contribution, and the applicant who modestly understates real and verifiable work. Both patterns are unfair to the applicant pool the committee is trying to serve, because both pull the committee away from accurate evaluation.
For dean searches and senior-faculty hires, the stakes are higher. The candidate has an established public record across two or three decades — published research, prior leadership roles, governance positions, public talks, possibly litigation history. A search committee that does not assemble that record fully is operating with fragmentary information at the moment a single decision binds the institution for five to ten years. Search firms do some of this work. Background-check vendors do a different slice. Neither produces a single coherent reading-committee-ready document with cited sources and a defamation-aware framing.
The harder problem is consistency. Two readers looking at the same applicant should not arrive at different factual baselines. When verification is ad hoc — one reader runs a Crunchbase search, another reads the company's website, a third asks a colleague — the baselines diverge. A standardized public-record report eliminates that drift. The committee debates the meaning of the record, not whether the record exists.
Then there is the recency question. A candidate who looked clean a year ago when they applied may have a new public-record surface today. A founder claim cited in the application may have unwound between submission and offer. A dean candidate may have entered a new public dispute since the search firm produced its initial brief. The report is dated. The reading committee knows when the picture was assembled.
What MentionFox brings to admissions and dean-search work
The Founder Vetter handles applicant claims about startups, shipped products, and entrepreneurial work. The Executive Vetter handles dean-search finalists and senior-faculty candidates. The Academic Den is the workspace for institutions running these workflows at volume. Methodology pages explain every source and every confidence statement.
Founder Vetter
The flagship report for any applicant or candidate who claims founder, co-founder, or startup-builder status. Cross-checks Crunchbase, AngelList, ProductHunt, GitHub, LinkedIn, the press surface, and the patent record against the application narrative. Snapshot tier returns in roughly two minutes with the verified company-of-record, role, founding date, and headline-traction signal. Full report covers cap-table presence where public, technical contribution where public, prior co-founder relationships, and prior litigation surface. Tuned for the case where the applicant is the central claim, not a peripheral one.
Executive Vetter
Built for dean-search finalists, provost candidates, department chairs, and senior tenure-track faculty hires. The full report runs twelve sections, including prior leadership history with reporting-line traceback, publication and grant record via OpenAlex and NIH RePORTER, governance roles via SEC and IRS Form 990 filings, prior-employer disciplinary surface where public, public-conduct record from press archives, board and committee positions, conflict-of-interest exposure, and the same defamation-aware framing that runs through the Founder Vetter.
Founder Methodology
The full source taxonomy and confidence framework behind every Founder Vetting Report. Federal-Primary sources (USPTO, SEC, NIH) carry the highest weight. Authoritative-Secondary sources include Crunchbase, AngelList, ProductHunt, and OpenAlex. Aggregator and forum sources are signal, never verdict. Unverified claims are tagged. The disambiguation hard-gate prevents wrong-person reports. Read this to understand exactly what the report does and does not assert.
Executive Methodology
Companion methodology for the Executive Vetting Report. Source taxonomy extends to SEC EDGAR governance filings, IRS Form 990 board-position records, OpenAlex citation graphs, NIH RePORTER grant histories, AGB association filings, and the academic-publication landscape. Defamation guardrails are explicit. The hard-gate prevents the wrong-Smith report.
Academic Den
The institutional workspace for admissions offices, deans, and provosts who run these workflows at volume. Houses your applicant or candidate pipeline, batches reports against a shared rubric, provides comparison views across finalists, and maintains a saved-rubric library so two reading committees a year apart use the same evaluation lens. Citation-tier weighting is tuned for academic context: top-journal citations weighted above mid-tier, preprint, or blog signal.
Verification Vetter Methodology
The trust spine that runs through every report on the platform. Source-class taxonomy, confidence framework, citation discipline, defamation guardrails, disambiguation hard-gate. If you want to understand the methodology behind every report on every subject before trusting any specific one, this is the document a faculty senate or trustee committee would want on file.
A typical workflow — what an admissions reader actually does
An undergraduate admissions reader at a selective private institution opens a file. The applicant is a high-school senior whose personal essay is built around a tutoring app she founded at age sixteen, with a claim of fifteen thousand active users and three thousand dollars per month in revenue. The reader has six minutes per file. She runs a Founder Snapshot for 30 credits while she reads the essay.
Two minutes later the snapshot returns. The Crunchbase entry exists with a founding date that matches the essay timeline. The ProductHunt launch is dated to the prior summer, with thirty-four upvotes — modest but real. The App Store entry is live with a developer credit that matches the applicant's name. GitHub shows a public repository under her handle with substantive commit history through the build period. The press surface is empty, which is consistent with a small bootstrap project, not a flag. The traction claim of fifteen thousand users is unverifiable in public records, but everything verifiable is consistent. The reader makes a note in her file and continues with the essay.
The next file the reader opens is the same applicant archetype, with a much larger claim — one hundred thousand users, a venture round, two co-founders. This snapshot returns differently. Crunchbase has no entry. The named venture round is not findable in any of the venture databases. The named co-founders' LinkedIn profiles do not list this company. The patent claim is unfindable in USPTO. The file goes to the senior reader's queue with the report attached. The committee meeting will be a different conversation than it would have been without the data.
What the readers did not do: spend twenty minutes per file researching every entrepreneurship claim. The reports did the assembly. The readers did the reading.
What data sources the report draws from
Every claim in a Founder or Executive Vetting Report is anchored to a named, public, verifiable source. The methodology pages list every source class and how it is weighted. For admissions and dean-search work specifically, these are the sources that drive the report.
- Common App essay taxonomy — used as a reading reference for understanding which claim categories appear in which application sections. Not a source for verification; a structure for matching applicant claims to verifiable records.
- National Center for Education Statistics IPEDS — the federal data system on US higher education institutions. Used for institutional context when applicants cite prior coursework, dual-enrollment, or summer-program affiliations at named institutions.
- Crunchbase — venture-funding and company-record database. Verifies named companies, founding dates, funding rounds, founder rosters, and acquisition history.
- AngelList — startup roster, syndicate participation, founder profiles. Cross-references with Crunchbase to surface inconsistencies.
- ProductHunt — public launch surface for shipped consumer and developer products. Verifies launch dates, traction signal, and founder attribution.
- GitHub — public commit history under the applicant's handle. Verifies technical contribution claims for applicants who claim engineering work.
- LinkedIn public surface — career timeline cross-reference. Used to confirm that the applicant or candidate's stated history is consistent with what they have published about themselves.
- USPTO patent and trademark database — federal record of patent applications and grants. Used for applicants and candidates who claim invention or patent authorship.
- NIH RePORTER and PubMed — federal research-funding and publication records. Verifies research-contribution claims and grant-history claims for graduate-program applicants and faculty candidates.
- OpenAlex — open academic-publication graph covering 250 million scholarly works. Citation counts, co-author network, venue tier, and publication-pattern verification for academic candidates.
- SEC EDGAR — federal corporate-disclosure database. Surfaces governance roles, board positions, and ownership stakes that intersect with academic appointments.
- IRS Form 990 records — federal nonprofit-disclosure filings. Verifies board-chair, board-member, and officer claims for candidates who cite nonprofit governance roles.
- Press archive via NewsAPI and Google News — long-tail press surface. Used for both founder traction-claim verification and senior-candidate public-conduct review.
Federal-Primary sources carry the highest weight. Authoritative-Secondary sources include Crunchbase, AngelList, ProductHunt, OpenAlex, and IPEDS. Aggregator and forum sources are signal, not verdict. Unverified claims are tagged, not laundered. The methodology page is the long-form version of this list.
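As an illustration only, the source-class hierarchy above can be sketched in a few lines. The class names follow the methodology summary; the numeric weights, the threshold, and the combine rule are hypothetical assumptions for this sketch, not MentionFox's actual scoring.

```python
# Hypothetical sketch of a four-class source-weighting scheme.
# Class names follow the methodology summary on this page; the
# numeric weights and the threshold are illustrative assumptions.

WEIGHTS = {
    "federal_primary": 1.0,          # USPTO, SEC, NIH: highest weight
    "authoritative_secondary": 0.7,  # Crunchbase, AngelList, ProductHunt, OpenAlex, IPEDS
    "aggregator": 0.3,               # signal, never a verdict on its own
    "forum": 0.2,                    # signal, never a verdict on its own
}

VERDICT_THRESHOLD = 0.7  # a claim needs at least one strong source to verify


def assess(claim_sources):
    """Return 'verified', 'consistent', or 'unverified' for one claim.

    claim_sources: list of source-class names that corroborate the claim.
    """
    if not claim_sources:
        return "unverified"      # tagged as unverified, never silently dropped
    best = max(WEIGHTS[s] for s in claim_sources)
    if best >= VERDICT_THRESHOLD:
        return "verified"
    return "consistent"          # aggregator/forum signal only: consistent, not proven


print(assess(["federal_primary"]))       # verified
print(assess(["aggregator", "forum"]))   # consistent
print(assess([]))                        # unverified
```

The key design point the sketch encodes is the "signal, not verdict" rule: no combination of aggregator or forum sources can cross the verification threshold on its own.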
Sample report walkthroughs
The canonical founder sample is Dario Amodei, co-founder and chief executive of Anthropic, prior leadership at OpenAI and Google Brain. The full Founder Vetting Report runs the entire source taxonomy and the executive-history overlay since the subject has both founder and senior-leadership records.
For an executive sample suitable as a reading reference for senior-faculty or dean-search work, see the Brian Chesky Executive Vetting Report. For a self-attested founder report shape, see the Saul Fleischman Founder Vetting Report — useful as a small-company reference contrast to the large-company Amodei sample.
Pricing for this use case
Founder Snapshot
30 credits. Returns in roughly two minutes. Verified company-of-record, founding date, role, headline traction signal, top public-record consistency check. The right tier for first-pass undergraduate admissions reads where the application makes a startup or shipped-product claim and the reader needs a fast consistency check before escalating the file.
Founder Vetting Report v2
200 credits. Returns in five to eight minutes. Full source taxonomy across twelve sections. Cap-table presence where public, technical-contribution detail, prior co-founder relationships, prior funding history, public-conduct surface, defamation-aware framing. The right tier for graduate-program admissions, scholarship finalists, and any high-stakes applicant where the entrepreneurship claim is the central narrative.
Executive Vetting Report
200 credits. Returns in five to eight minutes. Tuned for dean-search finalists, provost candidates, department chairs, and senior-faculty hires. Prior leadership history, publication and grant record, governance roles, public-conduct surface, board positions. Use when the search firm brief needs to be supplemented with a public-record diligence layer.
Volume and institutional subscriptions
Credit packs scale to admissions-office volume. Institutional subscriptions provide a per-cycle credit grant tuned to applicant-pool size. Trustee-level reporting is available on request. See the pricing page for credit-pack options and institutional-subscription details.
Mini case studies
The selective undergraduate program reading entrepreneurship essays
A reading committee at a selective private undergraduate program processes nine thousand applications per cycle. Roughly twelve percent of applicants build their personal narrative around an entrepreneurship claim. The committee buys a credit pack tuned to that volume and runs a Founder Snapshot on every entrepreneurship-claim file. The marginal cost per applicant is small. The marginal benefit is that the second-stage reader sees a consistent verification layer before the long-form essay is read, which sharpens the conversation about narrative quality versus achievement substance.
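Using the figures in this case study and the 30-credit Snapshot price from the pricing section, the per-cycle credit math works out as a simple back-of-envelope calculation:

```python
# Back-of-envelope credit math for the undergraduate case study above.
applications = 9_000
entrepreneurship_share = 0.12   # ~12% of files carry a founder claim
snapshot_cost = 30              # credits per Founder Snapshot

files_flagged = int(applications * entrepreneurship_share)   # 1,080 files
credits_per_cycle = files_flagged * snapshot_cost            # 32,400 credits

print(files_flagged, credits_per_cycle)
```

Roughly 1,080 snapshots per cycle, or about 3.6 credits averaged across the full 9,000-file pool, which is the sense in which the marginal cost per applicant is small.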
The dean-search committee at a research university
A search committee for a school dean has narrowed to four finalists from the search firm's brief. Each finalist has a thirty-year public record. The committee runs an Executive Vetting Report on each finalist for 200 credits per report — 800 credits total — over a single afternoon. The reports surface a governance overlap one finalist had not disclosed, an OpenAlex citation pattern that contradicts a CV claim from another, and a clean public record on the remaining two. The committee is no longer choosing between four candidates with the same fragmentary picture; it is choosing between two with full coverage and two with material questions to resolve in next-round interviews.
Frequently asked questions
Why would an admissions office want a public-record vetting report?
Because the modern application essay leans heavily on entrepreneurship, social-impact projects, and shipped products, and the claims often outrun what the applicant actually built. A founder vetting report cross-checks Crunchbase, GitHub, AngelList, ProductHunt, and the press surface against the application narrative and returns a cited report in minutes.
How does this fit with the rest of holistic review?
It supplements, never replaces, holistic review. Counselor letters, transcripts, standardized testing, and the personal essay all stay in their place. The vetting report adds a public-record verification layer for specific factual claims an applicant makes about external achievements. The reading committee still reads everything.
What sources does the report cite?
Crunchbase, AngelList, GitHub, ProductHunt, LinkedIn, the press surface via NewsAPI and Google News, USPTO patent and trademark records, NIH RePORTER and PubMed for research claims, the Common App taxonomy as a reading reference, and OpenAlex for academic publication verification.
Will the applicant know we ran a report?
No. Reports are private to the buying institution, and applicants are not notified that a report was run. The report cites only records that are already public.
What does it cost per applicant?
A Founder Snapshot is 30 credits and returns in roughly two minutes. A full Founder Vetting Report v2 is 200 credits. For dean searches and faculty hires, the Executive Vetting Report is 200 credits. Volume credit packs and institutional subscriptions are available on the pricing page.
Can we run this on faculty candidates and dean-search finalists?
Yes. The Executive Vetting Report is built for senior-leadership candidates including provosts, deans, department chairs, and tenure-track senior faculty. It covers prior leadership history, publication and grant record, governance roles, public-conduct surface, and disciplinary history where it appears in the public record.
Does this work for international applicants?
Yes, with the same caveat that applies to all international diligence work: the public-record surface is shallower outside the United States and varies by country. The methodology page documents which sources cover which jurisdictions and how the confidence framework adjusts.
What is the defamation framework?
Reports use a four-class source taxonomy and a probability yardstick adapted from the UK Professional Head of Intelligence Assessment framework. Allegations are tagged as litigation activity, never as adjudicated wrongdoing. Aggregator and forum sources are signal, never verdict. The methodology page is published in full.
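The PHIA probability yardstick maps verbal confidence terms to percentage bands, with deliberate gaps between bands. A minimal sketch of that mapping follows; the band edges are the commonly published figures and should be treated as approximate, and the nearest-midpoint fallback for gap values is this sketch's own assumption, not part of the yardstick.

```python
# The UK PHIA probability yardstick, as commonly published.
# Percentage bands are approximate; the gaps between bands are
# deliberate in the original framework.
PHIA_YARDSTICK = [
    ("remote chance",          0,   5),
    ("highly unlikely",       10,  20),
    ("unlikely",              25,  35),
    ("realistic possibility", 40,  50),
    ("likely / probable",     55,  75),
    ("highly likely",         80,  90),
    ("almost certain",        95, 100),
]


def yardstick_term(pct):
    """Map a percentage to the nearest PHIA term.

    Falls back to the band with the closest midpoint when pct
    lands in a gap between bands (an assumption of this sketch).
    """
    return min(PHIA_YARDSTICK,
               key=lambda band: abs(pct - (band[1] + band[2]) / 2))[0]


print(yardstick_term(85))   # highly likely
print(yardstick_term(3))    # remote chance
```

A fixed vocabulary like this is what keeps two reports from describing the same evidence with different confidence language.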