MentionFox
Live Case Study

Aggressive Autopilot vs Manual GEO Optimization — RiteKit's 30-Day Story

We turned MentionFox Aggressive Autopilot loose on ritekit.com on 2026-05-09. No human in the loop, no hand-tuned content. Goal: see how 30 days of fully automated AI-visibility work moves the needle against the day-0 baseline.

measurement in progress
Day 0 · Baseline

Where RiteKit started

Per-LLM win rate, top cited competitors, and sample winning + losing conversations.

Read the baseline →
Day 7

First-week movement

[Day 7 TBD — measurement in progress]

Day 14

Mid-window read

[Day 14 TBD — measurement in progress]

Day 21

Compounding effects

[Day 21 TBD — measurement in progress]

Day 30 · Final

30-day final story

[Day 30 TBD — measurement in progress]

Methodology

Every snapshot is taken with the same measurement fingerprint, so the day-over-day deltas are directly comparable rather than artifacts of a changed test setup.
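One way to read "same fingerprint": hash the full measurement configuration so any two snapshots sharing a hash are guaranteed to be comparable. This is a hypothetical sketch, not MentionFox's actual implementation; the config keys (`questions`, `models`, `scoring`) are illustrative:

```python
import hashlib
import json

def fingerprint(config: dict) -> str:
    """Hash a canonical serialization of the measurement config.
    Two snapshots with the same fingerprint used identical setups."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Unchanged config across snapshots -> identical fingerprint -> comparable deltas.
baseline_cfg = {"questions": "bank-v1",
                "models": ["gemini_flash", "gpt4omini"],
                "scoring": "v1"}
day7_cfg = dict(baseline_cfg)
assert fingerprint(baseline_cfg) == fingerprint(day7_cfg)
```

Any change to the question bank, model mix, or scoring version would produce a different fingerprint, flagging the snapshots as non-comparable.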

What "Aggressive Autopilot" actually does

It's the same engine MentionFox runs for itself, just unsupervised. Aggressive mode pushes the LLM allocation toward the cheaper, faster models (gemini_flash + gpt4omini do the bulk of conversations), keeps Claude switched off to control cost, and lets the question bank cycle without manual approval. Every winning conversation feeds the next prompt's context. Every losing one feeds the next round of GEO content. Thirty days, no human babysitter — that's the test.