MentionFox
Methodology · v1.0

How Foxhole scores companies

No black box. Here's exactly what we measure and why.

01 · Signal taxonomy

Foxhole watches fourteen signal types across the public surface area of every company on the radar. Each signal carries a weight, a detection method, and a stated reason for inclusion. Active signals (those producing scores today) are listed first; signals in the constraint set, ready to activate as collectors ship, follow below.

Engineering Acceleration

Tier 1 · high weight

What it measures

A sustained spike in commit cadence across a company's public repositories compared to its own 30-day baseline.

How we detect it

We pull commit metadata from public GitHub organisations on a daily cycle and compare current-window throughput to baseline. Ratios above 1.7x register as a signal.
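The ratio check can be sketched as a comparison of a short current window against the trailing 30-day baseline. This is an illustrative sketch, not Foxhole's production collector: the seven-day current window and the `cadence_ratio` / `is_acceleration_signal` names are assumptions; only the 1.7x threshold comes from the text above.

```python
from statistics import mean

def cadence_ratio(daily_commits, window=7, baseline=30):
    """Compare recent commit throughput to the trailing 30-day baseline.

    daily_commits: commit counts per day, oldest first.
    Returns the ratio of the current window's mean to the baseline mean,
    or None when there is not enough history to score.
    """
    if len(daily_commits) < window + baseline:
        return None  # not enough history for a baseline
    current = daily_commits[-window:]
    prior = daily_commits[-(window + baseline):-window]
    base = mean(prior)
    if base == 0:
        return None  # dormant repo: a ratio would be meaningless
    return mean(current) / base

def is_acceleration_signal(daily_commits, threshold=1.7):
    """Ratios above 1.7x register as a signal, per the methodology."""
    ratio = cadence_ratio(daily_commits)
    return ratio is not None and ratio >= threshold
```

A repo averaging 2 commits/day that jumps to 5 commits/day for a week produces a 2.5x ratio and fires the signal; a flat cadence does not.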

Why it matters

Commit velocity typically doubles three to six weeks before fundraise announcements and major enterprise releases. The pattern shows up before any wire hits.

Executive Function Hire

Tier 1 · high weight

What it measures

A first-time senior leadership hire that signals a transition from founder-led to function-led execution: VP Sales, Head of Operations, CFO, COO, Head of Customer Success.

How we detect it

Daily scan of company About / Team / Careers pages, cross-referenced with public LinkedIn role announcements and press releases.

Why it matters

The executive function hire is the canonical "we just outgrew our seed motion" signal. Bootstrapped companies making this hire tend to be preparing for either enterprise scale or a first institutional round.

Enterprise Hiring

High weight

What it measures

Roles that only exist at companies entering enterprise sales motion: VP Sales, Enterprise AE, Solutions Engineer, Customer Success Manager, RevOps lead.

How we detect it

Daily ingest of public job postings across major boards (LinkedIn Jobs, AngelList, Wellfound, company career pages). Classifier groups roles into enterprise-motion buckets.

Why it matters

Hiring an enterprise sales motion changes a company's runway math. If a bootstrapped company starts hiring AEs without first raising, one of two things is usually true: a contract pipeline already exists, or a round is imminent.

SOC2 / Compliance Hiring

High weight

What it measures

A first-time security, compliance, or controls hire: SOC2 Lead, Compliance Officer, Security Engineer (compliance focus), or DPO / Data Protection Officer roles.

How we detect it

Pattern-match against the same job posting feed used for Enterprise Hiring, with a separate keyword set tuned to security and compliance language.

Why it matters

Enterprise prospects ask for SOC2 before they sign. A company hiring its first compliance lead is preparing for contracts that require it. This is operational scaling without the press release.

Contributor Growth

Medium weight

What it measures

An expansion in the unique-contributor count across a company's public repositories, especially when the new contributors are full-time team members rather than community PRs.

How we detect it

Track unique commit authors across public organisations over rolling 30-day windows. Filter for committers whose contribution velocity suggests employment rather than drive-by community contribution.
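The employment filter described above can be sketched as a threshold on both commit volume and distinct active days within the rolling window. This is a hedged illustration: the `min_commits` and `min_active_days` cutoffs and the function name are assumptions, not Foxhole's tuned values.

```python
from collections import Counter
from datetime import date, timedelta

def likely_employees(commits, as_of, window_days=30,
                     min_commits=8, min_active_days=5):
    """Heuristic filter: authors whose 30-day activity looks like
    full-time employment rather than a drive-by community PR.

    commits: iterable of (author, commit_date) pairs.
    Thresholds are illustrative placeholders.
    """
    start = as_of - timedelta(days=window_days)
    per_author_days = {}
    per_author_count = Counter()
    for author, day in commits:
        if start <= day <= as_of:
            per_author_count[author] += 1
            per_author_days.setdefault(author, set()).add(day)
    # Sustained, multi-day activity suggests employment; a single
    # burst or one-off PR does not clear both thresholds.
    return {
        a for a, n in per_author_count.items()
        if n >= min_commits and len(per_author_days[a]) >= min_active_days
    }
```

Comparing this set's size across consecutive 30-day windows yields the contributor-growth delta the signal tracks.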

Why it matters

Quietly growing the contributor pool is a leading indicator of team expansion that often pre-dates any public hiring announcement.

Star Acceleration

Medium weight

What it measures

An acceleration in GitHub stars on a company's main public repositories, normalised against the repository's own historical growth rate.

How we detect it

Daily star-count delta against the trailing baseline. Acceleration ratios above 2x register as a signal. We weight new-star sources for quality — a star from a recognised practitioner counts more than a star from a brand-new account.
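The quality weighting can be sketched as a per-stargazer weight applied before the delta is compared to baseline. A minimal sketch under stated assumptions: the weight values, the 0.5 default for unknown accounts, and both function names are illustrative; only the 2x threshold comes from the text above.

```python
def weighted_star_delta(new_stars, quality):
    """Sum today's new stars, weighting each by stargazer quality.

    new_stars: account ids that starred today.
    quality: account id -> weight in (0, 1]; e.g. a recognised
             practitioner near 1.0, a brand-new account near 0.2.
             Values are illustrative, not production weights.
    """
    return sum(quality.get(acct, 0.5) for acct in new_stars)

def star_acceleration(today_weighted, trailing_daily_mean, threshold=2.0):
    """Flag when the quality-weighted daily delta runs at >= 2x
    the repository's own trailing baseline."""
    if trailing_daily_mean <= 0:
        return False  # no history to normalise against
    return today_weighted / trailing_daily_mean >= threshold
```

Normalising against the repository's own history is the design choice that matters here: a 50-star day is an event for a small repo and noise for a large one.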

Why it matters

Star acceleration is a community-pull signal. It tells you the practitioner audience is starting to notice. For open-source-first companies, it precedes enterprise pull-through.

Serial Bootstrapped Builder

Tier 1 · high weight

What it measures

A founder track record of shipping multiple public products independently — not first-product founders, but operators on their third or fourth shipped thing.

How we detect it

Cross-reference founder accounts against ProductHunt maker histories, GitHub commit graphs across multiple repositories, and public domain registration patterns.

Why it matters

The pattern predicts which lean teams will actually scale. A founder who already shipped three products knows the parts of building that don't get romanticised. They tend to underpromise and outbuild.

Bootstrapped Milestone Tweet

High weight

What it measures

A public revenue, customer, or growth milestone announcement from a verified founder of a bootstrapped company.

How we detect it

NLP classification of public social posts from verified founder accounts. Filter for milestone language ($X ARR, Y customers, Z% growth, profitable, sustainable, etc.) tied to a known bootstrapped entity.

Why it matters

Bootstrapped founders only announce milestones they want enterprise prospects (or future investors) to know about. The post itself is a deliberate signal of inflection.

Signal types ready to activate

The following signal types are in the scoring constraint set and ready to activate as their collectors ship. They appear in the schema today; live data appears in scoring as each collector goes online.

Cross-Platform Pain Mentions

Tier 1 · high weight

What it measures

Velocity of unmet-need conversations on public forums, GitHub issues, and product community threads — the categories of problems users are looking to solve before the companies that solve them exist.

How we detect it

Cross-platform crawl of public discussion surfaces, NLP-tagged for pain-signal language and grouped into emerging categories. Acceleration ratios above 1.5x flag the category.

Why it matters

The most asymmetric calls in venture come from spotting a category in motion before any company has scaled into it. This signal is the leading edge of that.

Emerging Vendor Recommendation

Tier 1 · high weight

What it measures

Practitioners on public threads recommending a specific vendor as the "new" or "better" alternative to incumbents in a category.

How we detect it

Pattern-match against public posts containing recommendation language paired with a vendor name, where the vendor has below-threshold incumbency in the category.

Why it matters

Word-of-mouth from practitioners is the most expensive signal to fake. When a vendor is genuinely being recommended in its category, scale follows.

Tech Stack Migration

Tier 1 · high weight

What it measures

A meaningful change in the infrastructure layer beneath a company's product — database, observability stack, deployment platform, identity layer.

How we detect it

Pattern-match against public technical blog posts, conference talks, public job postings citing specific stacks, and public BOM disclosures.

Why it matters

You don't migrate stacks until the cost of the migration is less than the cost of not migrating. The signal tells you a company has reached the scale where infrastructure choices matter more than founder taste.

Operational Scaling Without Capital

High weight

What it measures

A bootstrapped company with no recent fundraise that is nonetheless hiring across multiple operational functions in parallel.

How we detect it

Composite: funding status from public registries combined with the hiring velocity feed from Enterprise Hiring and SOC2 Hiring signals.

Why it matters

This is the operational definition of "bootstrapped company entering enterprise scale." The signature signal of Foxhole's whole thesis.

Founder Substack Growth

Medium weight

What it measures

Audience and engagement growth on a founder's public Substack, newsletter, or long-form publication.

How we detect it

Public subscriber counts and public post engagement counts, cross-referenced with a founder identity that has been verified against the company entity.

Why it matters

Founders with growing public audiences have asymmetric distribution. They can launch and scale things with less paid acquisition than competitors. The audience is itself an asset.

Founder Twitter Engagement Quality

Medium weight

What it measures

Engagement quality on a founder's public social posts — not raw follower count, but the depth of interaction from recognised practitioners in the founder's category.

How we detect it

Engagement graph analysis on public posts: who replies, what their profile says they do, whether they're cited elsewhere in the category.

Why it matters

A founder whose threads pull in genuine domain practitioners has a different category position than a founder whose threads pull in only followers. The first is a category authority; the second is an audience.

02 · Scoring formula in plain language

A company's total Foxhole score is a weighted sum of its active signals, with recency decay, and a multiplier that applies only under specific combinations.

Base score is the sum of (each detected signal × its strength × its weight × a recency decay factor). Signals carry strengths between zero and one based on how confident the detection is. Weights are higher for Tier 1 signals and lower for confirmatory signals.
Recency decay reduces the contribution of older signals. Signals from the last thirty days count at full weight. Signals between thirty and sixty days old count at half weight. Signals between sixty and ninety days count at a quarter weight. Signals older than ninety days drop out entirely.
BOOST multiplier applies when specific signal combinations fire together. A Tier 1 signal paired with bootstrapped or no-recent-funding status multiplies the score by 1.5. A Tier 1 signal paired with a serial-bootstrapped-builder signal also multiplies by 1.5.
Multipliers stack but are capped at 2.0x total. A company that triggers both BOOST conditions does not get 2.25x — it gets 2.0x. The cap exists because at some point further multiplier becomes overconfidence, not measurement.
Capital efficiency modifier adds or subtracts up to 0.1 to the final score based on the company's Capital Efficiency Score, described below.
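The plain-language formula above can be expressed directly in code. The decay steps, the 1.5x BOOST conditions, the 2.0x cap, and the ±0.1 CES modifier all come from the text; the dict field names and the exact boundary handling at 30/60/90 days are assumptions for illustration.

```python
def decay(age_days):
    """Recency decay: full weight to 30 days, half to 60,
    quarter to 90, then the signal drops out entirely."""
    if age_days <= 30:
        return 1.0
    if age_days <= 60:
        return 0.5
    if age_days <= 90:
        return 0.25
    return 0.0

def foxhole_score(signals, bootstrapped, ces_modifier=0.0):
    """signals: list of dicts with illustrative keys
         type, strength (0-1), weight, age_days, tier1 (bool).
    ces_modifier: within [-0.1, +0.1]. Field names are assumptions;
    the authoritative formula is the prose description above.
    """
    base = sum(s["strength"] * s["weight"] * decay(s["age_days"])
               for s in signals)

    has_tier1 = any(s["tier1"] and decay(s["age_days"]) > 0
                    for s in signals)
    has_serial = any(s["type"] == "serial_bootstrapped_builder"
                     and decay(s["age_days"]) > 0 for s in signals)

    multiplier = 1.0
    if has_tier1 and bootstrapped:
        multiplier *= 1.5
    if has_tier1 and has_serial:
        multiplier *= 1.5
    multiplier = min(multiplier, 2.0)  # cap: 2.25x never happens

    return base * multiplier + ces_modifier
```

A single fresh Tier 1 signal (strength 0.8, weight 3.0) at a bootstrapped company scores 0.8 × 3.0 × 1.0 × 1.5 = 3.6; adding a serial-builder signal triggers both BOOST conditions, and the multiplier caps at 2.0x rather than compounding to 2.25x.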

03 · Capital Efficiency Score

The Capital Efficiency Score (CES) measures how much real-world output a company appears to produce per unit of headcount. It runs alongside signal scoring and contributes a small adjustment to the final number.

Inputs

Public headcount estimates from major directories, the company's own pricing surfaces (where published), and public traffic indicators. No private revenue data — ever.

Confidence bands

Every CES score carries a confidence band reflecting how complete the input picture is. A company with public pricing, public team page, and public traffic data scores at higher confidence than a company with only one of those.

Tiered output, not point estimates

We surface tiers (Exceptional / Strong / Average / Below average / Insufficient data) rather than a precise revenue-per-employee number, because the uncertainty in any single component makes a point estimate misleading.

Tier · What it means
Exceptional · Public surfaces suggest the company produces more output per employee than the strong majority of comparable companies in its sector.
Strong · Above the median of comparable companies; the company appears to be getting efficient output from each hire.
Average · In line with comparable companies in its sector and stage.
Below average · Public surfaces suggest output per headcount lags comparable companies. Could be true; could also mean public surfaces are out of date.
Insufficient data · Not enough public surface area to produce a confident score. The CES modifier does not apply at this tier.

Why tiers and not exact revenue figures: any inferred revenue number is a guess wrapped in math. Publishing it as a precise figure invites legal risk and gives investors false certainty. Surfacing a tier with a confidence band tells the truth: "here is what the public surface suggests, and here is how confident we are in that suggestion."
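The tier-and-confidence approach can be sketched as two small mappings: a sector-relative percentile to a tier, and a tier plus confidence to the small score adjustment. Everything numeric here is an assumption for illustration (the percentile cutoffs, the modifier steps, the function names); the text above commits only to the tier labels, the ±0.1 bound, and the rule that insufficient data contributes nothing.

```python
def ces_tier(percentile, sources_present):
    """Map a sector-relative output-per-headcount percentile to a tier.

    percentile: 0-100 rank against comparable companies, or None.
    sources_present: how many input categories (pricing surfaces,
        team page, traffic indicators) returned data.
    Cutoffs are illustrative; Foxhole publishes tiers, not these numbers.
    """
    if percentile is None or sources_present == 0:
        return "Insufficient data"
    if percentile >= 80:
        return "Exceptional"
    if percentile > 50:
        return "Strong"
    if percentile >= 30:
        return "Average"
    return "Below average"

def ces_modifier(tier, confidence):
    """Adjustment within +/-0.1, scaled by the confidence band;
    zero whenever the data is insufficient."""
    step = {"Exceptional": 0.1, "Strong": 0.05,
            "Average": 0.0, "Below average": -0.05}
    if tier == "Insufficient data":
        return 0.0
    return step[tier] * confidence
```

Scaling the modifier by confidence means a thinly-sourced "Exceptional" nudges the score less than a fully-sourced one, which is the point of the confidence band.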

04 · What we don't do

The negative space matters as much as the positive. These are categories of intelligence that other tools build and Foxhole deliberately does not.

05 · Data sources

Every signal Foxhole produces is traceable to a publicly verifiable source. The categories we draw from:

05b · Receptiveness Radar

Foxhole's signal score answers "should I reach out?" Receptiveness answers a different question: "when?" Five operational, observable signal sources, each completely optional. Receptiveness is never folded into the main signal score; it is a separate chip and a separate sort option.

What feeds the Receptiveness Radar

What we don't do

The Receptiveness Radar is deliberately narrow. The following are out of scope — permanently.

How we interpret tiers

High receptiveness appears when multiple operational signals indicate an open window — for example, a fresh senior hire combined with elevated posting activity and recent public appearances. The recommended frame is "reach out this week, not next month." A high tier is a description of the founder's current observable activity pattern, not a prediction that they will engage with any specific message.

Receptive (medium tier) appears when some operational signals are present but the picture is mixed. The recommended frame is "reasonable to reach out within the next two weeks." Combine with the company's main signal score and your own thesis fit.

Low and insufficient-data tiers are never shown publicly as chips. A low tier means the founder does not currently show the observable activity pattern that correlates with open windows — it is not a judgement of the founder or the company. Insufficient-data means we could not find enough public signal sources to score with reasonable confidence.

Confidence scales with sources. When all five operational signal sources return data, confidence is high. When only one or two return data, confidence is low and the score is suppressed to insufficient-data regardless of the underlying values. Receptiveness scores expire after fourteen days and refresh on the daily scoring cycle.
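The suppression and expiry rules above can be sketched as a single tiering function. The 14-day expiry and the suppress-below-minimum-sources rule come from the text; the three-source minimum, the 0.7/0.4 tier cutoffs, and the function name are assumptions for illustration.

```python
from datetime import date, timedelta

def receptiveness_tier(source_values, scored_on, today, min_sources=3):
    """source_values: dict of up to five operational sources -> 0-1
    score; only sources that actually returned data appear.
    Thresholds are illustrative placeholders.
    """
    if today - scored_on > timedelta(days=14):
        return "expired"            # stale: refreshed on the daily cycle
    if len(source_values) < min_sources:
        # Too few sources: suppress regardless of the underlying values.
        return "insufficient-data"
    avg = sum(source_values.values()) / len(source_values)
    if avg >= 0.7:
        return "high"
    if avg >= 0.4:
        return "receptive"
    return "low"
```

Note the ordering: expiry and source-count suppression are checked before any value is averaged, so a strong score from two sources never surfaces as a tier.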

06 · Confidence and limitations

Honest framing of what Foxhole is and is not.

We're new. Foxhole scoring improves with longitudinal data. Companies that have been on the radar longer have richer signal histories and more reliable scores. The first three months for any single company are the noisiest.

We surface the methodology so you can challenge it. If you think a weight is wrong, tell us. If you think a signal type is missing, tell us. We update the methodology in public.

If we score a company wrong, tell us. Every score is reviewable. Every signal carries its source URL. We don't ask you to trust us — we ask you to check us.

No predictions, no guarantees. A high Foxhole score is a description of a company's current public-signal pattern, not a prediction of outcome. Companies with high scores can fail. Companies with low scores can win. The score is a sourcing prior, not a forecast.

See it on a real company

Open the sample report to see how this methodology surfaces as a four-section sourcing brief, or request access if you want the live feed for your thesis.