How the AI Presence Index Works
The AI Presence Index produces two things: a score (0 to 100) and a full report. The score is the headline number. The report is the real value. This page documents exactly how both are generated.
This page is designed to be cited. Full transparency, no black boxes.
Overview
The 5-Step Pipeline
Brand Context Inference
When you enter a brand name or URL, we first build a deep understanding of what the company actually does. If a URL or domain is provided, we fetch the live homepage and extract the title, meta description, and page content. This real website data is fed to GPT-4o along with the brand name.
The model returns:
- Specific category: "B2B SaaS SEO and GEO agency", not "digital marketing platform". Based on the actual website, not a guess.
- Ideal customer profile: Who the product is built for.
- Top 5 direct competitors: Used in competitive prompts and the competitor analysis.
This context shapes every prompt that follows. Wrong context = wrong prompts = useless results. That is why we fetch the actual website instead of guessing.
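A minimal TypeScript sketch of this step, assuming the OpenAI Node SDK with JSON-mode output. The function and field names (inferBrandContext, BrandContext) are ours for illustration, not the production code:

```ts
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

interface BrandContext {
  category: string;      // e.g. "B2B SaaS SEO and GEO agency"
  icp: string;           // ideal customer profile
  competitors: string[]; // top 5 direct competitors
}

async function inferBrandContext(brand: string, url?: string): Promise<BrandContext> {
  let site = "";
  if (url) {
    const html = await (await fetch(url)).text();
    const title = html.match(/<title[^>]*>([^<]*)<\/title>/i)?.[1] ?? "";
    const meta = html.match(/<meta\s+name=["']description["']\s+content=["']([^"']*)/i)?.[1] ?? "";
    // Crude tag stripping for the sketch; a real pipeline would use an HTML parser.
    const body = html.replace(/<script[\s\S]*?<\/script>/gi, "").replace(/<[^>]+>/g, " ").slice(0, 4000);
    site = `Title: ${title}\nMeta description: ${meta}\nContent: ${body}`;
  }
  const res = await openai.chat.completions.create({
    model: "gpt-4o",
    response_format: { type: "json_object" },
    messages: [{
      role: "user",
      content: `Brand: ${brand}\n${site}\n\nReturn JSON with keys "category" (string), "icp" (string), and "competitors" (array of 5 strings).`,
    }],
  });
  return JSON.parse(res.choices[0].message.content ?? "{}") as BrandContext;
}
```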
28 Buyer-Intent Queries
We run 7 prompts across 4 platforms (28 total). These mirror actual buyer behavior: the questions real people type into AI systems when evaluating software or services.
The 7 prompts:
- Category recommendation: "What are the best [category] options for [ICP]? Give me your top recommendations."
- Purchase decision: "I need a [category]. Which one should I choose and why?"
- Market comparison: "Compare the top [category] tools available right now. Which ones stand out?"
- Brand evaluation: "What is [Brand] and is it any good?"
- Social proof: "What do customers and reviewers say about [Brand]?"
- Competitive framing: "Should I use [Brand] or one of its competitors?"
- Head-to-head: "[Brand] vs [Top Competitor]: which is better?"
All 28 queries fire in parallel for speed. The category, ICP, and competitors are taken from Step 1.
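Template interpolation is straightforward. Here is a sketch; fillPrompts and the {placeholder} syntax are our own illustrative conventions:

```ts
type BrandContext = { category: string; icp: string; competitors: string[] }; // as in the Step 1 sketch

// The 7 buyer-intent templates listed above.
const TEMPLATES = [
  "What are the best {category} options for {icp}? Give me your top recommendations.",
  "I need a {category}. Which one should I choose and why?",
  "Compare the top {category} tools available right now. Which ones stand out?",
  "What is {brand} and is it any good?",
  "What do customers and reviewers say about {brand}?",
  "Should I use {brand} or one of its competitors?",
  "{brand} vs {competitor}: which is better?",
];

function fillPrompts(brand: string, ctx: BrandContext): string[] {
  return TEMPLATES.map((t) =>
    t.replace("{category}", ctx.category)
     .replace("{icp}", ctx.icp)
     .replace("{brand}", brand)
     .replace("{competitor}", ctx.competitors[0]), // head-to-head uses the top competitor
  );
}
```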
Structured Response Analysis
Each of the 28 AI responses is individually analyzed by GPT-4o-mini using a strict JSON extraction prompt. This is not string matching; it is structured comprehension of what the AI actually said about your brand. For each response, we extract:
- Mention status: top_pick, recommended, mentioned, mentioned_negatively, or not_mentioned.
- Sentiment: positive, neutral, or negative, where the brand is mentioned.
- Position: where the brand ranks among the options the AI names.
- Competitors: every other brand the response recommends.
- Verbatim quotes: the exact sentences the AI wrote about the brand.
This gives us 28 structured data points, not just 28 yes/no answers.
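A sketch of what each structured record plausibly looks like. The field names are illustrative; the enums are implied by the weights in the Scoring section below. The extraction call mirrors the Step 1 pattern, just with gpt-4o-mini:

```ts
interface ResponseAnalysis {
  platform: "chatgpt" | "perplexity" | "claude" | "gemini";
  promptType: string;          // which of the 7 templates produced the response
  mention: "top_pick" | "recommended" | "mentioned" | "mentioned_negatively" | "not_mentioned";
  sentiment: "positive" | "neutral" | "negative" | null; // null when not mentioned
  position: number | null;     // 1 = named first; null when absent
  competitors: string[];       // other brands the response recommends
  quotes: string[];            // verbatim sentences about the brand
}
```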
Scoring (0 to 100)
The score is a composite of 4 sub-scores, each worth 25 points:
Mention Rate (0 to 25)
Weighted by recommendation strength: top_pick = 1.0, recommended = 0.85, mentioned = 0.5, mentioned_negatively = 0.15, not_mentioned = 0. Weights are summed across all 28 queries and scaled to 25. A brand that is the top pick everywhere scores 25; a brand mentioned in passing half the time scores around 6.
Sentiment (0 to 25)
Across all queries where the brand is mentioned: positive = 1.0, neutral = 0.4, negative = 0. Averaged and scaled to 25. Always mentioned positively = 25. Always neutral = 10. A mix of positive and negative lands in between, depending on the ratio.
Position (0 to 25)
Position 1 = 25 points. Position 2 = 17. Position 3 = 11. Position 4 = 6. Position 5+ = 2. Averaged across all queries where the brand is mentioned. Consistently first = 25. Consistently 4th or 5th = 6 or lower.
Platform Breadth (0 to 25)
4/4 platforms = 25. 3/4 = 17. 2/4 = 10. 1/4 = 4. 0/4 = 0. This rewards brands visible everywhere, not just on one platform.
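The published weights transcribe directly into a scoring function. This is a sketch (production rounding and edge-case handling may differ), reusing the ResponseAnalysis record from the Step 3 sketch:

```ts
const MENTION_WEIGHT: Record<string, number> = {
  top_pick: 1.0, recommended: 0.85, mentioned: 0.5,
  mentioned_negatively: 0.15, not_mentioned: 0,
};
const SENTIMENT_WEIGHT: Record<string, number> = { positive: 1.0, neutral: 0.4, negative: 0 };
const POSITION_POINTS = [25, 17, 11, 6];   // positions 1-4; position 5+ earns 2
const BREADTH_POINTS = [0, 4, 10, 17, 25]; // indexed by number of platforms mentioning you

function scoreIndex(analyses: ResponseAnalysis[]): number {
  const mentioned = analyses.filter((a) => a.mention !== "not_mentioned");

  // Mention Rate: weighted sum over all 28 queries, scaled to 25.
  const mentionRate =
    (analyses.reduce((s, a) => s + MENTION_WEIGHT[a.mention], 0) / analyses.length) * 25;

  // Sentiment: averaged over mentioned queries only, scaled to 25.
  const sentiment = mentioned.length
    ? (mentioned.reduce((s, a) => s + (SENTIMENT_WEIGHT[a.sentiment ?? "neutral"] ?? 0), 0) /
        mentioned.length) * 25
    : 0;

  // Position: averaged point value over mentioned queries.
  const position = mentioned.length
    ? mentioned.reduce((s, a) => s + (POSITION_POINTS[(a.position ?? 5) - 1] ?? 2), 0) /
        mentioned.length
    : 0;

  // Platform Breadth: how many of the 4 platforms mention the brand at all.
  const breadth = BREADTH_POINTS[new Set(mentioned.map((a) => a.platform)).size];

  return Math.round(mentionRate + sentiment + position + breadth);
}
```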
Report Generation
The score is the headline. The report is where the real value lives. From all 28 structured analyses, GPT-4o generates a personalized 7-section report:
The 7-Section Report
Executive Summary
A 3 to 4 sentence analyst-style brief written by GPT-4o. Synthesizes all 28 data points into a plain-language assessment. Example: "Notion is strongly recognized across all 4 AI platforms as a leading project management and note-taking tool. It is consistently recommended as a top 3 option. Sentiment is overwhelmingly positive. The main competitive threat comes from Coda and Obsidian, which appear in 60% of the same responses."
Score Breakdown
The 4 sub-scores (Mention Rate, Sentiment, Position, Platform Breadth) with visual progress bars and contextual labels. Each score includes a one-line interpretation like "Your mention rate is strong. AI names you in 75% of relevant buyer queries."
Platform Intelligence
Per-platform deep dive for ChatGPT, Perplexity, Claude, and Gemini. Each platform card shows: mentioned or not, average position, dominant sentiment, and the actual verbatim quotes from the AI. The user sees exactly what each AI system says about their brand. This is the most visceral section of the report.
Competitive Landscape
A frequency analysis of every competitor found across all 28 queries. Shows which brands appear most often, on which platforms, and how many times they were recommended ahead of you. Answers the question: "Who is AI recommending instead of me?"
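The tally behind this section is a simple frequency count across the structured analyses. A sketch, again reusing the ResponseAnalysis record from Step 3:

```ts
// Count how often each rival brand appears across all 28 analyses,
// returning a map sorted most-recommended first.
function competitorFrequency(analyses: ResponseAnalysis[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const a of analyses)
    for (const c of a.competitors)
      counts.set(c, (counts.get(c) ?? 0) + 1);
  return new Map([...counts].sort((x, y) => y[1] - x[1]));
}
```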
Brand Perception Analysis
Synthesized from all verbatim quotes and sentiment data. Three categories: Positive Signals (what AI gets right about you), Negative Signals (criticism, caveats, warnings), and Misconceptions (things AI says that are factually wrong about your product). This is critical for fixing hallucinations and improving how AI understands your brand.
Prompt Gap Analysis
A 7x4 grid (7 prompts x 4 platforms) showing exactly where you appeared and where you did not. Each cell shows: mentioned (green), not mentioned (red), or platform error (gray). This reveals your blind spots. Maybe you show up on brand-specific prompts but disappear on category recommendation prompts. That gap is exactly where Citation Engineering starts.
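As data, the grid is just a 7x4 matrix of cell statuses. A sketch with illustrative names:

```ts
type CellStatus = "mentioned" | "not_mentioned" | "error";
const PLATFORMS = ["chatgpt", "perplexity", "claude", "gemini"] as const;

// One row per prompt, one column per platform. A missing analysis means
// the platform errored or timed out for that query.
function gapGrid(analyses: ResponseAnalysis[], promptTypes: string[]): CellStatus[][] {
  return promptTypes.map((prompt) =>
    PLATFORMS.map((platform) => {
      const a = analyses.find((x) => x.promptType === prompt && x.platform === platform);
      if (!a) return "error";
      return a.mention === "not_mentioned" ? "not_mentioned" : "mentioned";
    }),
  );
}
```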
Recommendations
3 to 5 specific, prioritized actions generated by GPT-4o based on all the data. Not generic advice. Personalized to your gaps. Example: "You are invisible in category comparison prompts on Perplexity and Gemini. Prioritize getting cited in third-party comparison articles and listicles that these platforms index. Focus on [specific competitor] displacement since they appear in 85% of the prompts where you are absent."
Platforms Queried
- ChatGPT (GPT-4o): 900M+ weekly users. The dominant AI assistant.
- Perplexity (Sonar): AI search with real-time web citations.
- Claude (Sonnet): Enterprise and technical user base.
- Gemini (1.5 Flash): Google ecosystem integration.
All 28 queries run in parallel. If a platform errors or times out, it is excluded and noted in the report.
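A sketch of that fan-out using Promise.allSettled, so one platform's failure never blocks the other 27 calls. queryPlatform is a hypothetical wrapper around each provider's SDK:

```ts
declare function queryPlatform(platform: string, prompt: string): Promise<string>;

async function runQueries(prompts: string[], platforms: string[]): Promise<string[]> {
  // 7 prompts x 4 platforms = 28 in-flight requests.
  const jobs = platforms.flatMap((platform) =>
    prompts.map((prompt) => queryPlatform(platform, prompt)),
  );
  const settled = await Promise.allSettled(jobs);
  // Rejected promises (API down, timeout) are dropped here and flagged in the report.
  return settled
    .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
    .map((r) => r.value);
}
```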
Limitations
- Non-determinism: AI responses vary. Scores may fluctuate 3 to 8 points between runs.
- Snapshot: Point-in-time measurement. AI models update their training data on different schedules.
- Category inference: Website-based when possible, but may not perfectly match your positioning.
- Platform availability: If an API is down, that platform is excluded. Score reflects available data.
- Verbatim quotes: Extracted by LLM analysis, not guaranteed to be perfectly accurate reproductions.
Want to improve your score?
DerivateX specializes in Citation Engineering for B2B SaaS brands. We fix the gaps this report reveals.