
The number on your dashboard is not a vibe. It is a weighted average of how often the four scoring engines name your brand as the recommended answer to buyer-shaped queries. Below is what each band means, real sample scores from live brands, what actually moves the number, and how long change takes.
Your score is a weighted win rate, scaled 0-100. Win rate means the percentage of buyer-shaped queries where your brand was named as the recommended answer, or landed in the top three with a positive justification. The four scoring engines, with weights, are listed on the methodology page.
The numbers below are pulled from the live promoter database, computed across the four scoring engines, normalized 0-100. They are real and they update.
Considered → Recommended. Live data, all conversations to date.
| Engine | Conversations | Wins | Win rate |
|---|---|---|---|
| Gemini Flash | 2,164 | 1,275 | 58.9% |
| GPT-4o-mini | 2,109 | 1,511 | 71.6% |
| DeepSeek | 515 | 147 | 28.5% |
| Mistral | 193 | 55 | 28.5% |
Weighted score: (58.9 × 0.30) + (71.6 × 0.30) + (28.5 × 0.25) + (28.5 × 0.15) = 17.7 + 21.5 + 7.1 + 4.3 = 50.6 raw, then normalized through a non-linear curve that rewards getting above 50% on the two highest-weight engines. Final published number: 63.
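The raw weighted score can be recomputed straight from the table above. A minimal Python sketch; the weights come from the worked example, and the non-linear normalization curve that maps 50.6 raw to the published 63 is not specified here, so it is deliberately left out:

```python
# Recompute the raw weighted win rate from the live table.
# name: (conversations, wins, weight) -- weights from the worked example above.
engines = {
    "Gemini Flash": (2164, 1275, 0.30),
    "GPT-4o-mini":  (2109, 1511, 0.30),
    "DeepSeek":     (515,  147,  0.25),
    "Mistral":      (193,  55,   0.15),
}

raw = sum(wins / conv * 100 * weight for conv, wins, weight in engines.values())
print(f"raw weighted score: {raw:.1f}")  # -> raw weighted score: 50.6
```

Computing from raw wins and conversations (rather than the rounded percentages in the table) lands on the same 50.6, which is a useful check that the published per-engine rates were rounded honestly.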
Reading: MentionFox is now in the recommended zone for social listening and lead-finding queries on the engines that drive most buyer traffic (Gemini, GPT-4o-mini). Smaller engines (DeepSeek, Mistral) lag because the brand is newer and has less long-tail content out there for those models to ground on. The closing strategy is generated content for the long tail, not more conversations on the strong engines.
Honest gap: tracked as a co-profile reference, not yet run through the promoter loop.
Anthropic is a brand we co-profile (their Claude API powers parts of our stack) but we have not run them through the seven-LLM panel as their own subject. Doing so would be performative — they do not need GEOFixer to be findable. We list them here so you can see the difference between a brand that uses GEOFixer (MentionFox, RiteKit) and a brand that is referenced but not subject-tested.
Live baseline measured 2026-05-09. Day 0 of a 30-day case study.
| Engine (baseline panel) | Conversations | Cited | Cite rate |
|---|---|---|---|
| ChatGPT | 11 | 2 | 18.2% |
| Gemini | 11 | 0 | 0% |
| Claude (panel only) | 11 | 0 | 0% |
| Perplexity | 11 | 0 | 0% |
Total: 44 conversations, 2 cites, 4.55% raw rate. Top competitors named in those conversations: Buffer (21 cites), Hootsuite (20), Sprout Social (19), Later (15), SocialBee (9). The 30-day plan and live measurement are documented in the RiteKit case study.
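The baseline arithmetic follows the same pattern as the weighted score, just unweighted. A minimal sketch; engine names and counts are copied from the baseline table, nothing else is assumed:

```python
# Day-0 baseline: raw cite rate across the four-engine panel.
# name: (conversations, cites) -- from the 2026-05-09 baseline table.
panel = {
    "ChatGPT":    (11, 2),
    "Gemini":     (11, 0),
    "Claude":     (11, 0),
    "Perplexity": (11, 0),
}

conversations = sum(c for c, _ in panel.values())
cites = sum(k for _, k in panel.values())
rate = cites / conversations * 100
print(f"{conversations} conversations, {cites} cites, {rate:.2f}% raw rate")
# -> 44 conversations, 2 cites, 4.55% raw rate
```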
Your score is mostly explained by a small set of variables, and the levers below target them in rough order of impact.
What rarely explains a low score: pricing, your design system, your logo, or how good your product actually is. Engines do not see those. Engines see text and links.
| Lever | Typical lift (points) | Time to land |
|---|---|---|
| Generated long-form content for losing query categories. Autopilot finds the queries where you lose, drafts content, you publish (or we publish to the shadow site). | +8 to +20 | 4-8 weeks |
| Structured comparison pages (you vs your top three competitors). Engines retrieve these heavily for "X vs Y" queries. | +5 to +12 | 2-4 weeks |
| Active conversation training on the engines where you lose. Multi-turn conversations that surface your differentiators in context. | +3 to +8 | 1-3 weeks |
| Shadow site serving for AI crawlers (clean structure, schema markup, no JS chrome). Levels the playing field for crawler-readability. | +2 to +6 | 1-2 weeks |
| Earning a citation in a high-authority source (your category's top listicle, a Wikipedia footnote, a respected blog round-up). | +3 to +10 per citation | 4-12 weeks (PR work) |
| Renaming your brand to be unambiguous, fixing brand-name collisions, owning your slug. (Counts only if you actually have a collision.) | +1 to +15 | Varies |
Compounding matters. A single content piece moves you a little. Twenty pieces over twelve weeks compound. Trackers will not give you twenty pieces. Autopilot will.
The honest answer is "longer than you want, faster than SEO." The time-to-land column above gives the realistic range for each lever.
Anyone who promises "+30 points in 60 days, guaranteed" is either lying or planning to game the score. The category, the starting depth of content, and how aggressively you publish dominate the result. We promise the methodology, the measurement cadence, and the work. The number is the number.
Five-day free trial. The seven-LLM panel runs against your domain on day one. The score and the engine-by-engine breakdown go live on your dashboard.
Get my baseline score