
Capturing Growth with AIO / AEO / GEO – it’s part of SEO

Search is undergoing another shift. Traditional SEO – optimizing content to rank in Google’s blue links – now converges with AI Overviews, Generative Engine Optimization (GEO), and Answer Engine Optimization (AEO). These buzzwords reflect one reality: search results are increasingly AI-generated, with LLM-based assistants (ChatGPT, Claude, Google’s Gemini, etc.) synthesizing answers from multiple sources. While some herald this as a new paradigm, savvy growth marketers recognize it as an extension of SEO, not a replacement. As we’ll explore, AIO/GEO/AEO operate within SEO’s domain – leveraging the same content and authority fundamentals, but adapting tactics for AI-driven results.

AI Optimization (AIO) / Generative Engine Optimization (GEO) / Answer Engine Optimization (AEO) Definition

AIO, GEO, and AEO all describe the practice of optimizing content so that LLMs and AI answer engines – ChatGPT, Perplexity, Google AI Overviews, Claude, Grok, and others – surface it when answering user queries directly. AEO is the segment of SEO focused on content that directly and concisely answers search queries; in practice, AIO, AEO, and GEO are used interchangeably.

TL;DR KEY AIO AEO GEO STATISTICS

Definitions & Overlap:

  • 3 names for same concept: GEO, AIO/LLMO, AEO used interchangeably
  • Brand mentions: 0.664 correlation (strongest AI visibility signal)
  • Backlinks: 0.218 correlation (3X WEAKER than mentions)

Citation Sources:

  • Reddit: 40.1% (leading overall)
  • Wikipedia: 26.3% (ChatGPT’s #1 at 47.9% within top 10)
  • YouTube: 23.5% (310% increase since Aug 2024)
  • Google.com: 16.38%

Search Volume:

  • Google: 14 billion searches/day
  • ChatGPT: 37.5 million searches/day
  • 373x difference (verified)
  • Google growing 21.64% YoY

Traffic & Conversions:

  • AI traffic: 0.1-0.15% of total (growing to 1%+ for some sites)
  • ChatGPT conversion rate: 15.9% vs. Google’s 1.76% (9x higher)
  • Users spend 68% more time from AI referrals
  • AI traffic grew 9.7x over past year

AI Overview Impact:

  • Appears in 47% of searches
  • 34.5% CTR reduction on average
  • 15-70% traffic losses across industries
  • 60% zero-click searches
  • Takes up 42-48% of screen space

Crawler Activity:

  • GPTBot: +305% growth (May 2024-2025)
  • ClaudeBot: 38,065:1 crawl-to-refer ratio (most imbalanced)
  • Training: 79% of AI crawling (up from 72%)
  • 51% of all web traffic is automated (2024)

Future Projections:

  • 40% of enterprise apps will integrate AI agents by end 2026 (Gartner)
  • 25% decline in traditional search volume by 2026
  • 50% of enterprises will deploy AI agents by 2027 (Deloitte)
  • $47.1B AI agent market by 2030

What Skeptics Say

  • Quality, helpful content still matters most
  • E-E-A-T principles unchanged
  • Structured data and schema markup (already used in SEO)
  • Clear writing and direct answers (already best practice)
  • Authority building and citations (already core to SEO)
  • Fast, mobile-friendly, accessible websites (already required)

Built In: “No, GEO isn’t replacing SEO, but instead can be thought of as expanding upon or supplementing it. Traditional SEO still matters for ranking in search engines, but GEO adds a new layer of search ranking focused on visibility within AI-generated answers.”

What Critics Acknowledge

Critics note these are evolutionary, not revolutionary changes. Featured snippets and position zero have required similar optimization for years.

Limited Genuine Differences:

  • Query length (23 words vs 4 words average)
  • Conversational tone more important
  • Single synthesized answer vs list of links
  • Different measurement metrics (mentions vs clicks)
  • Need to optimize for being cited/quoted

Multiple critics note this follows previous “SEO is dead” cycles 🙃

  • “SEO is dead, long live Social Media Marketing” (2010s)
  • “SEO is dead, long live Content Marketing” (2012-2015)
  • “SEO is dead, long live Voice Search Optimization” (2017-2019)
  • “SEO is dead, long live GEO/AIO” (2024-present)

However, note that…

Andreessen Horowitz (a16z): “Traditional search was built on links. GEO is built on language…Unlike traditional search, LLMs remember, reason, and respond with personalized, multi-source synthesis.”

Search Engine Land: “Will GEO replace SEO – or become part of it?…Honestly, I think the most likely outcome lies somewhere in between. Most plausibly, GEO will end up as a sub-specialty rather than a standalone discipline.”

What’s Legitimately New in AIO / AEO / GEO

  1. Different output format (single synthesized answer vs list of links)
  2. Platform diversity (need to optimize for ChatGPT, Claude, Perplexity, etc.)
  3. Measurement challenges (can’t use traditional tools; need new tracking)
  4. Citation vs ranking (being quoted/mentioned vs being clicked)
  5. Black box nature (even less transparency than traditional algorithms)

Platform Shift Is Happening

Traditional Search Decline:

  • Gartner: 25% drop in traditional search engine volume by 2026 (important caveat: the drop is in clicks – zero-click search is growing)
  • Basis Technologies: AI Overviews associated with 34.5% lower average CTR

Emerging Competition:

  • ChatGPT: 700M weekly users (4x increase since last year)
  • Amazon & TikTok: Capturing product search advertising revenue
  • Google’s Market Position: Current 57% of $300B market projected to decline to 55% globally, 48% U.S. (2025-2026)

Fundamental Shift:

  • Old Model: Keyword queries → list of blue links
  • New Model: Natural language conversations → direct AI answers

Casey Winters, Growth Advisor: “Platform shifts that create both technological and distribution opportunities happen in a sequence, not all at once. The technological shift has happened with AI (like websites with the internet, apps with mobile), but AI lacks a new distribution channel yet.”

Historical Parallel:

  • Internet: Websites created 1994 → Google search (distribution) 1998 (4-year gap)
  • Mobile: Apps created 2008 → Facebook mobile ads (distribution) 2012 (4-year gap)
  • AI: Models launched 2022-2023 → Distribution shift TBD (estimated 2026-2027)

I. AI is Shifting Distribution: Search is Changing

A. New Growth Channel via LLMs 

The landscape of search optimization has definitively entered a new phase, characterized by a fundamental structural shift driven by the integration of generative AI directly into search engine results pages (SERPs). Analysis suggests that 2024 should be viewed in retrospect as the definitive “peak traffic year” for traditional organic search, foreshadowing a structural and sustained decline in raw clicks from standard organic listings. This decline is fueled by the prominence of Google’s AI Overviews (AIO), which are estimated to appear in 25% to 30% of all Google searches.  

The immediate consequence of this integrated generative content is the physical displacement of traditional organic listings: when Google’s AI Overviews block appears, the first traditional organic result is pushed down the page by an average of roughly 140% of its previous depth, substantially diminishing its visibility and click capture. For organizations driven by aggressive growth targets, this structural change forces a high-stakes choice – the “prisoner’s dilemma of distribution.” If an organization postpones investment in AI search optimization, its competitors will move swiftly to capitalize on the emerging, high-value distribution channels being temporarily opened by platforms like OpenAI and Google.

What matters for AIO / AEO / GEO

Semrush study (June 2025) – most-cited domains in AI answers:

  • Reddit: 40.1%
  • Wikipedia: 26.3%
  • YouTube: 23.5%
  • Yelp: 21%
  • Facebook: 20%
  • Amazon: 18.7%
  • Google.com: 16.38%

Platform-Specific Patterns:

  • ChatGPT: Wikipedia leads at 7.8% (47.9% within top 10 sources)
  • Perplexity: Reddit dominates at 6.6%
  • Google AI Overviews: Reddit leads at 2.2%, more distributed
  • .com domains: 80%+ of all citations
  • .org domains: 11.29% (second-most cited TLD)

Historical cycles of platform shifts demonstrate that this “platform open” window, which offers privileged organic distribution access to early movers, is rapidly compressing, becoming “shorter and shorter” with each new wave of technology. Capitalizing on this narrow window can help achieve “escape velocity” before the platforms mature and restrict access. 

B. Traffic Volume <> Conversion Quality Trade-off

While the observed reduction in raw organic click volume is noteworthy, it does not correlate proportionally with a loss in revenue across most sectors. This critical divergence is explained by the inherently superior quality of traffic driven by Large Language Models (LLMs) compared to legacy top-of-funnel clicks.  

The high-conversion uplift model associated with AEO suggests that LLM-driven traffic is dramatically superior in quality. For example, a published case study demonstrated that Webflow experienced a 6x conversion rate difference between users arriving via LLM referrals and those arriving from traditional Google search. This superior performance is not accidental; it signals a powerful mid-funnel acceleration effect. In the conventional search model, users required multiple sequential steps to satisfy informational and commercial investigational intent. The LLM effectively automates the information foraging process, acting as a trusted research agent that compresses the often protracted content consumption and comparison stages of the buying journey. Users referred by LLMs are pre-qualified and transactionally primed, having already validated their intent and options, leading to superior conversion outcomes upon site entry.  

This new commercial reality mandates an immediate redefinition of Key Performance Indicators (KPIs). The historical reliance on vanity metrics, such as raw traffic volume or simple keyword rankings, must yield to metrics that capture generative visibility and authority. Priority KPIs for the AI era include measuring the Share of AI Visibility for critical target topics, tracking Citation Frequency across influential external sources, and monitoring the Sentiment of Brand Mentions within generative answers. The verifiable high ROI demonstrated by the conversion uplift model fundamentally refutes the perspective that AEO is simply “hype” used to repackage old services; the strategic and commercial necessity of AEO investment is robustly supported by empirical data.
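As a minimal sketch of the first of those KPIs, Share of AI Visibility can be approximated by sampling AI answers for a set of target prompts and counting how often each brand is mentioned. The brand names and answers below are hypothetical:

```python
from collections import Counter

def ai_share_of_voice(answers: list[str], brands: list[str]) -> dict[str, float]:
    """Share of AI visibility: the fraction of sampled AI answers
    that mention each tracked brand (yours plus competitors)."""
    counts = Counter()
    for answer in answers:
        text = answer.lower()
        for brand in brands:
            if brand.lower() in text:
                counts[brand] += 1
    total = len(answers) or 1  # avoid division by zero on an empty sample
    return {brand: counts[brand] / total for brand in brands}

# Hypothetical sample: three answers to "best social media monitoring tool?"
answers = [
    "Many users recommend Brand24 for social media monitoring.",
    "Popular options include Hootsuite and Brand24.",
    "Hootsuite is a common choice for scheduling and monitoring.",
]
print(ai_share_of_voice(answers, ["Brand24", "Hootsuite"]))
```

A production version would collect answers via each platform's API on a schedule and track the trend over time; the counting logic stays this simple.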

“There is massive overlap in SEO and GEO, such that it doesn’t seem useful to consider them distinct processes. The things that contribute to good visibility in search engines also contribute to good visibility in LLMs.” – Ryan Law 

II. Defining AIO / GEO / AEO 

A. AIO / GEO / AEO & AI Overviews vs SEO 

While often used interchangeably, the terminology surrounding AI search optimization – AEO, AIO, GEO, and AI Overviews – describes distinct facets of the same objective. AIO (AI Optimization) and AEO (Answer Engine Optimization) are the most encompassing and favored terms, defining the process of optimizing assets and signals so that a brand is cited or featured in the direct, generative answers provided by LLMs such as ChatGPT, Claude, Gemini, and Perplexity. AI Overviews refers specifically to the generative answer block positioned at the top of Google SERPs. GEO (Generative Engine Optimization) is a less precise synonym for AEO; AEO is preferred for its specific focus on optimizing the quality and citation of the “answer,” as opposed to the broader category of “generation” (which can include code or images).

AIO/GEO/AEO vs SEO table

| Aspect | Traditional SEO (Web Search) | AIO/GEO/AEO (AI Answer Optimization) |
| --- | --- | --- |
| Goal | Rank content high on SERPs (earn clicks to your site). | Get content/brand mentioned or cited in AI-generated answers. |
| Result format | 10 blue links + snippets; user clicks through to websites. | Single composed answer pulling from multiple sources, with citation links. |
| Key ranking signals | Content relevance, on-page SEO, backlinks (authority). | All of SEO’s signals plus frequency of mentions across trusted sources (“citation share”). Links matter less; unlinked brand mentions matter more. |
| Scope of queries | Often shorter queries (avg ~3–4 words) targeting specific keywords. | Longer natural-language prompts and follow-ups (avg ~25 words for chat vs 6 for search). Many niche, never-before-seen questions (a larger “long tail”). |
| Speed to visibility | Slow for new sites (need to build domain authority; months of work). | Can be fast for new brands if mentioned on existing authoritative sites – a startup can appear in answers “tomorrow” after one popular mention. |
| Content required | High-quality content targeting keywords/topics; technical SEO for indexing. | “Answer-ready” content: concise factual snippets, clear structure, schema. Also off-site content (videos, forum posts, etc.) that can be cited. |
| User engagement | User clicks a result to get detailed content on site. | User often gets the answer directly; fewer clicks out (only ~1% click a cited source in Google’s AI answers). Conversion happens if your brand is presented as the solution in the answer. |
| Optimization mindset | “Rank #1 for X keyword” mindset. | “Be included in answers for X prompt” mindset: not necessarily the #1 site, but referenced across many sources. Measured by share-of-voice in LLM answers, not just rank. |

The technical divergence between this new methodology and traditional SEO lies in the mechanism of authority transfer. Traditional SEO historically revolved around maximizing link authority (link building). In contrast, AEO success is driven by securing a high volume of quality mentions from authoritative domains. 

Three Core Mechanisms for AIO / AEO / GEO visibility:

  1. Training Data Visibility: creating well-structured content on relevant topics so it enters model training corpora
  2. RAG & Grounding Data: LLMs ground answers via traditional search indexes (Bing, Google), so classic search visibility feeds AI answers
  3. Adversarial Examples: content crafted to manipulate model outputs – essentially grey-zone SEO
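Mechanism 2 (RAG and grounding) is worth making concrete: the model does not “know” the answer, it retrieves indexed pages and synthesizes from the top matches. The toy sketch below stands in for a real search index with naive keyword-overlap scoring; the URLs and page texts are invented for illustration:

```python
def retrieve(query: str, index: dict[str, str], k: int = 2) -> list[str]:
    """Return the k page URLs whose text shares the most words with the query.
    A real grounding pipeline uses a full search index; the ranking idea is the same:
    pages with no classic search visibility never reach the generation step."""
    q_words = set(query.lower().split())
    scored = sorted(
        index.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [url for url, _ in scored[:k]]

# Hypothetical mini-index of three pages
index = {
    "example.com/gmail-salesforce": "connect gmail and salesforce automatically",
    "example.com/pricing": "pricing plans for teams",
    "example.com/blog/ai": "thoughts on ai and search",
}
sources = retrieve("how do I connect Gmail and Salesforce?", index)
# The generated answer would then be synthesized from (and cite) these pages.
```

The point of the sketch: whatever scoring the engine uses, only pages that retrieve well become candidates for citation, which is why SEO fundamentals remain the feeder for AI visibility.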

Shared Ranking Signals for SEO and AIO/GEO

| Factor | Traditional SEO | GEO/AIO |
| --- | --- | --- |
| Content Quality | High-quality, helpful content | Same – AI favors authoritative content |
| E-E-A-T | Critical for rankings | Critical for citations |
| Site Speed | Ranking factor | Crawlability for AI bots |
| Structured Data | Schema for rich results | Schema for AI understanding |
| User Intent | Match user intent | Intent doesn’t change |
| Fresh Content | Freshness factor | AI prefers citing newer content |

Key Differences:

  • Links: hyperlinked backlinks count (SEO) vs. even unlinked mentions are valuable (AIO)
  • Success Metric: rankings/CTR/traffic (SEO) vs. citations/mentions (AIO)
  • Primary Asset: your website (SEO) vs. third-party mentions (AIO)

Why AIO/GEO/AEO are Subsets of SEO (Not Separate Paradigms)

Several reasons underscore why optimizing for AI answers should be viewed as part of an overall SEO strategy, rather than a totally separate effort.

LLMs Depend on Search Indexes. AI search assistants are essentially wrappers around search engines. They have no built-in oracle of facts; instead they run queries and remix search results into answers. ChatGPT with browsing, Google’s SGE, Anthropic’s Claude – all fundamentally retrieve web content. GEO is thus the bridge between SEO and AI: “your classic search visibility still matters, but only as a feeder into which sources the AI decides to read.”

Google’s AI Overviews (SGE) Rely on Google’s Infra. Google’s generative Search Generative Experience (SGE, powered soon by Gemini) literally uses Google’s index and ranking signals. As one growth advisor quipped, “AI Overviews matters a LOT & relies on Google’s infra!!” – meaning if you’re not SEO-visible to Google, you won’t be in its AI answers. Thus AIO isn’t a new search engine; it’s an extension of Google’s search. This is why we increasingly see AIO/GEO offered as an add-on in SEO tool suites (e.g. Ahrefs’ “Brand Raider” or Surfer’s “AI Tracker”), not as unrelated tools.

Shared End-Goal = Satisfy the User. Both SEO and AEO ultimately aim to answer the user’s query. Eli Schwartz stresses that even in the age of LLMs, success means providing the information or solution the user seeks. If an AI overview gives a quick answer but the user’s problem isn’t solved, they’ll click through – ideally to you. In Schwartz’s view, “nothing changes [with AI Overviews]” if you focus on being the one who actually solves the user’s need. He gives the example of Tinder: an AI overview about “online dating in Dubai” might describe the scene, but it can’t actually solve the loneliness problem – the user still needs a dating app. Tinder’s SEO strategy was to have a page for “online dating in [City]” so that no matter what the AI says, the user clicks into Tinder for the solution. In essence, both traditional and AI SEO must align content to user intent and the buyer journey. AIO is just forcing marketers to tighten that alignment (and do it across more channels).

“AIO/AEO is Hype – It’s Still SEO“. There’s healthy skepticism in the industry about shiny new acronyms. A notable presentation at a Sept 2025 Growth Meetup bluntly stated: “AIO/AEO = ‘hype’ to sell services. It’s SEO…”. The speaker, Robert Kowalski, argued that agencies are slapping new labels on the same deliverables. While that may be tongue-in-cheek, it reflects the reality that you don’t throw away your SEO playbook – you adapt it. For example, he notes inbound marketing fundamentals (organic content, product-led growth, etc.) remain critical, just now including things like ensuring your brand is mentioned in AI training data and chats. The “hype” exists, but smart teams are focusing on substance: applying SEO best practices to the new AI context.

B. LLM Retention and Context Matter for SEO, Because They Matter for AIO / AEO / GEO / AI Overviews 

Despite the disruption caused by LLMs, Google’s market dominance remains a foundational truth, supported by its proprietary infrastructure and estimated 98% share of mobile searches. Google’s vast core index remains the most complete data repository, serving as the necessary grounding source for its AIO, and frequently feeding the retrieval models of competing LLMs.  

The competitive dynamics among LLM platforms (OpenAI, Google, Anthropic) are evolving rapidly. The true competitive moat is shifting away from generalized model capability and toward context and memory. The highest-value LLM is the one that can ingest, store, and leverage the most personalized user context and conversational history. This depth of memory enables the model to produce increasingly relevant and personalized outputs, creating a powerful, self-reinforcing flywheel effect that drives user retention. Current analysis positions ChatGPT (OpenAI) as the primary platform to aggressively pursue, exhibiting superior retention curves and higher engagement depth than early competitors, suggesting it has achieved “escape velocity” in securing its user base. 

This platform shift is being accelerated by the impending emergence of AI agent ecosystems. Future architectures, defined by protocols like Google’s Agent2Agent (A2A) and the Model Context Protocol (MCP), envision a world where autonomous services (agents) call upon other LLMs and specialized APIs to execute complex, multi-step tasks. This is likely to be THE game-changer of the coming years. This structural development makes direct API integration and the provision of structured data an increasingly vital, long-term ranking and revenue factor within these distributed AI networks.

The data showing the growing volume of traffic from AI crawlers (GPTBot, ClaudeBot, etc.) suggests these platforms are actively building proprietary, commercial AI indexes. Consequently, allowing GPTBot access via robots.txt is no longer merely a friendly gesture but a strategic necessity: it secures the brand’s future visibility within the highly competitive generative index.
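In practice, granting or refusing that access is a few lines of robots.txt. The user-agent tokens below are the ones the crawler operators publish (GPTBot for OpenAI, ClaudeBot for Anthropic, Google-Extended for Gemini training); the allow/disallow choices are illustrative, not a recommendation:

```
# robots.txt – explicit directives for AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Example of the opposite choice: opt out of Gemini/Vertex AI training
# without affecting normal Google Search crawling (done by Googlebot).
User-agent: Google-Extended
Disallow: /
```

Note the asymmetry: blocking Google-Extended does not remove a site from Google Search, but blocking GPTBot does remove it from the index that feeds ChatGPT’s browsing and retrieval.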

C. Impact of Google’s AI Overviews on Search 

Global Rollout:

  • 2 billion monthly users globally (July 2025) – up from 1.5 billion (May 2025)
  • 200+ countries and territories with availability in 40+ languages
  • Global appearance rate: 18% of all Google searches
  • US appearance rate: 27.75-30% of searches
  • UK appearance rate: 19.23% of searches

User Engagement:

  • 10% increase in Google usage for query types showing AI Overviews (US & India)
  • 58% of US adults encountered at least one AI summary in Google search (2025)
  • Zero-click searches: 56% → 69% (May 2024 to May 2025)

Ranking Factors:

1. Traditional Search Rankings (Strongest Factor)

  • 76.10% of AI Overview citations rank in top 10 organic results
  • 86% of citations come from pages in top 100
  • 92.36% of AI Overviews link to at least one domain ranking in organic top 10
  • Median ranking: Position 3 overall, Position 2 for primary citation

2. Brand Web Mentions

  • 0.664 correlation – strongest predictor of AI visibility
  • Citations from authoritative third-party sources critical

3. Domain Authority & Backlinks

  • Top-cited websites: 6.7K to 25.1M referring domains
  • 86.2K to 38.3B backlinks
  • Link metrics correlation: Moderate to weak (0.10-0.25)

4. Content Structure

  • Structured data/Schema markup essential
  • Clear H1/H2/H3 hierarchy
  • Bullet points and lists favored
  • Mobile-first design: 81% of AI Overview citations from mobile traffic

5. Query-Specific Factors

  • Long-tail keywords (4+ words): 60.85% trigger rate
  • Low search volume (0-50 monthly): 35.42% trigger rate
  • Informational intent: 99.2% of triggers
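Factor 4 above (structured data) can be made concrete. Below is a minimal JSON-LD block using the schema.org FAQPage type – the type and properties are real schema.org vocabulary, while the question and answer text are illustrative – which makes a direct question–answer pair machine-readable for both rich results and AI parsing:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is Answer Engine Optimization (AEO)?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "AEO is the practice of optimizing content so that LLM-based answer engines can quote and cite it directly."
    }
  }]
}
</script>
```

The same pattern extends to Article, Product, and HowTo types; the strategic point is that the concise, self-contained answer text is exactly the unit an answer engine can lift into a generative response.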

Clicks vs Visibility Debate

CTR Decline Statistics:

  • Pew Research: 46.7% relative CTR reduction (8% with AI vs 15% without)
  • Ahrefs: 34.5% CTR drop for position 1 (from 7.3% to 2.6%)
  • Amsive: Average 15.49% CTR drop; up to 37.04% with featured snippets
  • Non-branded keywords: -19.98% CTR decline

The Visibility Paradox:

  • Only 8% of users clicked on website links when AI summaries shown (vs 15% without AI)
  • 1% clicked links within AI Overview itself
  • 26% ended browse session after AI Overview

Branded Search Exception:

  • +18.68% CTR boost for branded queries with AI Overviews
  • Only 4.79% of branded searches trigger AI Overviews

Industry-specific AI Overview ‘Trigger Rates’:

Highest:

  • Relationships: 54.84-62.38%
  • Business: 38.84-57.52%
  • Food & Beverage: 37.14-47.66%
  • Education: 48.70-51.86%

Lowest:

  • News & Politics: 3.76-4.50%
  • E-commerce: 2.14-2.48%
  • Fashion & Beauty: 1.34-1.46%

AI Overviews Traffic Impact

Average Impact:

  • 34.5% decrease in click-through rates when AI Overviews present (Ahrefs/DCN)
  • 34% to 46% CTR reduction across independent studies
  • 15% to 70% traffic drops reported across industries

Zero-Click Search Growth:

  • 58.5% of Google searches in U.S. result in zero clicks (2024)
  • Increased from 56% to 69% (May 2024 to May 2025)
  • Over 3 trillion searches in 2024 ended without a click

AI Overview Prevalence:

  • Appear for 47% of searches
  • Take up 42% of screen on desktop, 48% on mobile
  • Average 169 words and 7 links when expanded

Industry-Specific AI Overviews Impact

High Impact (Informational Content):

  • Publishing/News: HuffPost 50%+ traffic decline in 3 years
  • Education: Chegg 49% decline in non-subscriber traffic
  • Health & Medical: Large increase in AI Overview coverage
  • Travel & Tourism: Product reviews and guides heavily impacted

Moderate Impact:

  • E-commerce: Product reviews hit hard; transactional queries less affected
  • Finance: High engagement but adapting well
  • Legal: Growing AI traffic but maintaining conversions

Lower Impact:

  • Local SEO: AI Overviews show for only 7% of local queries
  • Branded searches: AI Overviews rarely appear
  • Commercial/Transactional: Less affected than informational

III. Why AIO / GEO / AEO is just part of SEO

A. Core SEO Principles are still here – EEAT, Freshness, Technical Excellence

The assertion that AIO/AEO represents an entirely new, distinct discipline is misleading; the optimization is still driven by the established core signals of SEO, but accelerated and intensified. AEO performance is fundamentally rooted in Google’s core infrastructure ranking signals: PageRank, RankBrain, BERT, the Helpful Content System, Reviews, and Freshness. The expert consensus frames AIO/AEO as a fast evolution of SEO.  

The most powerful filter for AI systems is E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness). For Gemini and other LLMs, the dimension of Trust (T) is paramount for content selection and citation. Content must overtly demonstrate verifiable expertise, rely on original primary research, showcase case studies, and be corroborated by credible, authoritative external citations. Technical Excellence functions as a mandatory baseline and is now a critical barrier to entry. Without technical foundations—robust crawlability, optimized page speed, and mobile-first design—AI systems cannot efficiently parse, categorize, or utilize the raw web data to formulate comprehensive generative answers.  

B. Hype vs. Reality: Is AIO/AEO Just a Sales Tactic? 

While some experts contend that AIO/AEO is predominantly “hype to sell services,” stressing that the underlying technological infrastructure and marketing frameworks remain consistent with long-term goals like Inbound-Led Growth, this perspective minimizes the severe impact of behavioral changes. The core reality is that user behavior has undergone a permanent shift: users now initiate search with longer, more conversational prompts, expecting a single, synthesized answer instead of evaluating a list of links. This shift creates a massive vacuum for highly specific content designed to address these complex, conversational user needs.

The models themselves are incentivized to seek high-quality sources due to the risk of Model Collapse. If LLMs were to rely exclusively on unassisted, AI-generated, derivative content, the model’s knowledge base would eventually collapse upon itself, leading to systemic factual errors, hallucinations, and a complete degradation of utility. To mitigate this structural risk, LLM providers are forced to prioritize diverse, human-validated, and non-derivative content. This necessity structurally elevates the value of content that demonstrates Information Gain—new data or unique perspective not previously indexed—and actively penalizes content that is overly “typical” or merely a rewritten version of existing articles.  

C. Funnel Shift 

Generative AI is restructuring the user journey and conversion funnel.

Top-Funnel Erosion is primarily impacting generic informational queries. AI Overviews excel at providing immediate, definitive answers to questions previously handled by high-volume, low-intent content (e.g., medical advice, simple definitions). Consequently, content models relying purely on ad-monetized informational traffic, such as those used by media publishers and classic affiliate sites, will experience the greatest decline in click volume and revenue.  

The Mid-Funnel Validation stage is being dramatically compressed and accelerated. The LLM functions as a trusted, automated research assistant, handling the initial investigation and directing the user toward a specific solution or comparative asset. A user searching for complex, comparative information emerges from the LLM interaction as a highly informed, pre-qualified lead with high transactional intent. This highly efficient process means that Bottom-of-Funnel (BOFU) content (e.g., pricing, comparison matrices, feature specifications) is now considered “AI Gold”. Content focusing on specific feature, use case, and integration questions is crucial for capturing the transactional value of these accelerated users.  

The structural shift confirms that the new content scoring heuristic for LLMs is Information Gain and Atypicality. Content that relies on original research, unique data, or specialized perspectives is favored because it avoids being categorized as “typical” derivative material. This structural incentive reinforces the necessity of Product-Led SEO incorporating unique, original proprietary data. 

IV. AIO / AEO / GEO Playbook

SEO vs. AEO Ranking Factor Shift

| Factor | Traditional SEO Focus | LLM/AEO Citation Driver | Strategic Goal |
| --- | --- | --- | --- |
| Authority | Backlinks (link building) | Mentions (high-authority PR/UGC) | Increase Share of Voice (SoV) |
| Content | High volume, keyword density | Comprehensive, conversational, question-answer structure | Capture long-tail conversational intent |
| UX/Engagement | CTR, dwell time, bounce rate (RankBrain) | Solution utility, time-to-action, app-layer integration (pSEO) | Accelerate mid-funnel conversion |

A. Product-Led SEO in the Generative Era

The modern SEO asset is now defined by product utility: functional pages, interactive tools, and unique data sets, forming the core of Product-Led SEO (pSEO). SEO must now be integrated as a product question, requiring engineering, design, and product management resources to create assets that solve problems, not just pages that describe them.  

The efficacy of this strategy is demonstrated by established models like Zapier and Tinder. Zapier successfully targeted the high-intent query of connecting two specific services (e.g., “Gmail and Salesforce”), creating programmatic pages that functioned simultaneously as an SEO asset and a product marketing asset. The Tinder model focused on localizing the product to solve the “loneliness problem,” generating programmatic location pages that efficiently accelerated users toward the transactional step of downloading the application. As the LLM accelerates users past informational content, Bottom-of-Funnel (BOFU) content detailing product features, use cases, and integrations becomes critically important, positioning it as “AI Gold” in the generative funnel.  
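The Zapier pattern described above is mechanical enough to sketch: one landing page per high-intent pair of services, generated from structured data rather than written by hand. The service names, URL scheme, and page template below are invented for illustration:

```python
from itertools import permutations

# Illustrative inputs: in a real pSEO pipeline these come from a product database.
SERVICES = ["Gmail", "Salesforce", "Slack"]
TEMPLATE = (
    "# Connect {a} and {b}\n"
    "Automate workflows between {a} and {b} in minutes - no code required.\n"
)

def build_pages(services: list[str]) -> dict[str, str]:
    """Return {url_slug: page_body} for every ordered pair of services,
    mirroring the 'Connect X and Y' programmatic-page pattern."""
    return {
        f"/integrations/{a.lower()}-{b.lower()}": TEMPLATE.format(a=a, b=b)
        for a, b in permutations(services, 2)
    }

pages = build_pages(SERVICES)
# 3 services -> 6 ordered pairs -> 6 landing pages, each targeting one
# high-intent query ("connect gmail and salesforce", etc.)
```

The design choice worth noting: each generated page doubles as an SEO asset (matching a specific query) and a product asset (the integration itself), which is what distinguishes pSEO from thin doorway pages.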

B. pSEO and AI Automation: Scaling Quality Content for AI Indexing

To achieve the necessary scale of specialized content, marketers must embrace AI-Led pSEO. This involves deploying AI automation to structure and repurpose proprietary data into scalable SEO assets. Case studies, such as the use of LLMs by Gyfted and Prawomat to manage job content pipelines by integrating structured data, demonstrate how organizations can rapidly scale the creation of good content to validate hypotheses quickly.  

The critical challenge of AEO is scaling the volume of high-quality mentions. This is addressed by automating the monitoring and identification of citation gaps, forming the basis of the Surfer Playbook, which explicitly advises marketers to “Reach Out Where You’re Missing” and “Act When AI Cites Competitors”.

User Experience (UX) and Engagement are validated as ultimate LLM signals. Studies confirm that engaging UX, particularly through ungated and open-access features that immediately meet search intent, drives higher engagement metrics (e.g., time on site and virality). These metrics feed back into ranking systems like RankBrain, establishing UX quality as paramount in the AIO/AEO/SEO equation.  

The requirement for success in the generative era structurally mandates cross-functional alignment. Achieving AEO success requires product teams to prioritize pSEO, technical implementation, and user experience, while marketing teams must rigorously execute PR and community outreach to acquire citations. The old siloed model is structurally incompatible with the generative era’s demand for integrated product and citation strategies.

The underlying behavioral change of LLM users reveals that the Long Tail of Conversational Prompts is substantially larger and more valuable than traditional SEO. LLMs effectively handle complex, conversational, and lengthy queries, resulting in a significantly larger volume of highly specific questions. Answering niche queries—often concerning obscure use cases or specific features—can provide a competitive edge by capturing traffic that is already high-intent and conversion-ready.  

V. The “Citation Economy” and Brand Visibility

Citation-Driven SEO vs Link-Driven SEO

One of the biggest conceptual shifts in the AI era of search is the move from a link-driven paradigm to a citation- or mention-driven paradigm. In traditional SEO, links (and their anchor text) were the currency of authority and relevance. In the AI answer paradigm, unlinked mentions, context, and co-occurrence matter a lot more, because the LLM doesn’t care if your name is hyperlinked – it cares if your name (or content) appears often in the trusted texts it consumes.

Let’s break down how brand mentions and citations are influencing rankings, often in ways that bypass the old link graph:

  • Citations vs. Backlinks: A “citation” in an AI answer is effectively a vote of confidence, analogous to a backlink in classic SEO. But it might come from content that isn’t directly linking out. For example, Reddit discussions or YouTube transcripts where your brand is named can be cited by the AI. These would never count as backlinks (they’re plain text), yet they can drive you significant traffic and credibility via the answer engine. From an SEO perspective, it suggests search engines may increasingly value implied links or mentions. Google’s patent literature has long talked about “implied links” (mentions without href) as a potential ranking factor, and we might be seeing that now informally via AI training.
  • PR and Brand Authority: This is why classic PR is resurging in importance. When your brand is featured in publications (even without an SEO-friendly link), it’s training the AI to associate you with the topic. For instance, an LLM might answer “What’s a good tool for social media monitoring?” by saying “Many users recommend Brand24 for social media monitoring…” precisely because it has seen Brand24 mentioned in many articles or forums about that topic. Those articles might not all link to Brand24, but the linguistic association is built. SEOs should therefore track mentions as a KPI, not just links. Tools and services that measure brand mentions (and sentiment around them) can help gauge your citation authority. In Kowalski’s Growth Meetup slide, he lists “Being featured without links on highly ranked websites” as part of AIO/AEO efforts. It’s a reversal of the usual “get a link or it’s not worth it” mantra.
  • Reddit, Quora, YouTube, LinkedIn Mentions: We’ve discussed these, but to outline their role in citation-driven SEO:
    • Reddit: Mentioned multiple times because it’s a goldmine of user-generated recommendations. A brand that is organically discussed on Reddit (in a positive light) gains AI credibility. It’s also often the source of long-tail queries, which AIs handle a lot. E.g., someone asks ChatGPT a very specific question – there’s a good chance the answer is drawn from a Reddit thread where exactly that specific scenario was discussed (because niche questions often get answered in forums). If your product or content was in that thread, congrats – you’re in the answer.
    • Quora: Functions similarly. People ask all sorts of questions on Quora, which get answered by enthusiasts or experts. Those answers (if public) are indexed by Google and have high SEO themselves, but now also feed LLMs. If a Quora answer mentions your tool as the solution to a problem, an AI might echo that in a summarized form.
    • YouTube: A lot of “tutorial” or “top 5” content has moved from blogs to YouTube videos. LLMs ingest transcripts, so they get those recommendations. If in a video someone says “I use XYZ software for this task,” the LLM effectively got a verbal backlink. On top of that, YouTube is a Google property and high-authority domain, making its transcripts more likely to be surfaced in AI overviews (surferseo.com). So creating video content isn’t just for YouTube SEO; it’s for AI SEO too.
    • LinkedIn: Many B2B topics have vibrant discussions on LinkedIn or articles posted natively there. An AI might cite a LinkedIn article by a well-known expert that mentions certain companies or tools. Also, LinkedIn is a trusted domain (spam is rarer there than on random blogs), so AIs may give weight to it. Thus, even posting on LinkedIn with key terms and your case studies might help the AI pick up those associations.
  • Structured Data & Knowledge Graphs: Structured data (like schema markup) plays a subtle role. It might not directly influence LLM outputs today (since they primarily read rendered text), but it feeds into knowledge graphs and rich results, which in turn inform AI. For example, if your organization has schema on your site and is linked to a Wikipedia page or Wikidata entity, the AI can more confidently tie mentions of your brand to your official site and info. Also, Google’s Knowledge Graph is used in its AI to verify facts. If your brand has a Knowledge Panel or appears in Google’s structured data knowledge (e.g. in schema.org/Article as an author), that could increase trust. One specific tip: implement FAQPage schema. Google might surface your FAQs directly in SGE or at least use them to answer questions. Bing’s chatbot has been known to use structured data (like it might say “According to the FAQ on company.com…” if it sees that pattern). Schema won’t guarantee citations, but it’s part of making your content machine-friendly. As Visto noted, it helps map entities clearly (searchenginejournal.com).
  • LLMs vs Traditional Crawlers: Traditional web crawlers (Googlebot, Bingbot) index the web page by page, primarily looking at HTML content and links to discover new content. LLMs (in training) gobble up massive chunks of text without the live link structure. During inference with RAG (retrieval), they use a search index (which is link-based) but then read and interpret content in a way a normal crawler doesn’t. A normal crawler doesn’t “summarize” your page content to decide if you rank; it uses algorithms on keywords, links, etc. An LLM actively reads your content at query time to decide if it answers the question. This means things like tone, comprehensiveness, and even formatting can influence whether you get cited. For example, if your page has the one sentence that directly answers the question, an LLM will grab it even if the rest of the page is weak. Conversely, if your content is overly fluffy and doesn’t get to the point, you might rank in regular SEO due to links/keywords, but the LLM may skip over you while summarizing and choose a more to-the-point source. This is why content optimized for extraction (bullet points, concise answers) is emphasized (surferseo.com). Essentially, traditional crawlers index, LLMs interpret. So you must optimize for interpretability.
  • NoFollow? No Problem: In this new world, whether a link is dofollow or nofollow is almost irrelevant for visibility (though it still matters for legacy SEO ranking). If a high DA site mentions you with a nofollow link (or just text), users may not click it directly, but the AI “saw” it and will consider it knowledge. This diminishes the worry about Google’s link graph for the sake of AI ranking. In fact, you might prioritize a mention on a Wikipedia page (which is nofollow) over a followed link on a low-quality blog, because Wikipedia will yield far more AI mentions. It’s a shift in thinking from “follow links for PageRank” to “mentions in trusted text for mindshare”.
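To make the linked-versus-unlinked distinction concrete, here is a minimal sketch (hypothetical brand name and HTML) that scans a crawled page and counts hyperlinked mentions separately from plain-text “implied” mentions, using only the Python standard library:

```python
# Sketch: separate classic backlink mentions (inside <a> tags) from
# unlinked "implied" mentions in a crawled HTML page.
# The brand name and sample HTML below are hypothetical.
from html.parser import HTMLParser

class MentionScanner(HTMLParser):
    def __init__(self, brand):
        super().__init__()
        self.brand = brand.lower()
        self.in_link = False
        self.linked = 0     # mentions inside <a> tags (classic backlinks)
        self.unlinked = 0   # plain-text mentions (implied links)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        hits = data.lower().count(self.brand)
        if self.in_link:
            self.linked += hits
        else:
            self.unlinked += hits

def count_mentions(html, brand):
    scanner = MentionScanner(brand)
    scanner.feed(html)
    return {"linked": scanner.linked, "unlinked": scanner.unlinked}

page = '<p>I recommend <a href="https://brand24.com">Brand24</a> for this. Brand24 also has alerts.</p>'
print(count_mentions(page, "Brand24"))  # {'linked': 1, 'unlinked': 1}
```

In a real pipeline you would run this over pages fetched from Reddit threads, Quora answers, or news articles, and track the unlinked count as its own KPI alongside backlinks.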

How to capitalize on citation-driven SEO:

  • Get Mentions: The technical reliance of LLMs on external corroboration confirms the Mentions Mandate: AEO success is driven by “mentions from high-authority domains”. This mechanism places a significant premium on classical public relations (PR) and intensive brand building. The path to authority requires building a brand identity credible enough to earn organic, high-quality mentions that validate the company’s product claims. A brand that successfully appears in an AI Overview has demonstrated strong external reputation and consistent brand building. Consequently, citation tracking replaces traditional link building as the core off-site metric. Marketers must meticulously track their Share of Voice (SoV) and citation frequency, actively identifying instances where the brand is absent or where competitors are cited, and strategically filling those gaps.
  • Monitor Mentions: use tools like Google Alerts, Mention, Brand24 (fittingly) to track where your brand or product is mentioned. Particularly watch the high-authority domains and community platforms.
  • Encourage Discussion: foster community engagement around your brand. This could be via user groups, Slack communities, or encouraging user reviews on sites that get scraped (Capterra/G2 in B2B, or StackOverflow for dev tools, etc.). If those discussions happen on indexable pages, they become fair game for LLMs.
  • Quality Over Quantity: not all mentions are equal. A mention in a heavily upvoted answer or an authoritative article is worth more than 100 forum spam mentions. The AI is likely tuned (and perhaps manually evaluated) to prefer credible sources. One expert speculated that ChatGPT’s team probably tunes the source selection by asking “Do we like these citations?” and likely they “intentionally configure their algorithm to use Reddit because it’s trusted”, and would drop it if it wasn’t. The same goes for others: if a platform gets overrun by spam, the AI might deemphasize it. So focus on earning mentions in places with genuine authority or community trust.
  • Influence Key LLM Data Sources: especially Reddit, Quora, YouTube, LinkedIn, and Wikipedia.
    • Reddit and Quora: Reddit and Quora are recognized as foundational sources for how AI systems learn and cite information due to their scale, authenticity, and niche expertise. They are heavily cited by LLMs for nuanced, real-world discussions and answers to obscure questions not typically covered in traditional, editorial sources. The correct engagement strategy is authentic, community-led marketing on UGC platforms. Always.
    • YouTube and Video Content: YouTube is explicitly identified as one of the key data sources used to train new LLM models, which suggests that video transcripts and associated metadata actively shape the models’ knowledge base. A distinct tactical opportunity exists in niche video creation, especially for B2B SaaS.
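The citation-tracking loop described above can be prototyped in a few lines of Python: sample AI answers for your target prompts, compute each brand’s share of voice, and flag prompts where a competitor is cited but you are absent. All brand names and answers below are hypothetical:

```python
from collections import Counter

def share_of_voice(answers, brands):
    """Percentage of sampled AI answers that mention each brand."""
    counts = Counter()
    for text in answers:
        low = text.lower()
        for brand in brands:
            if brand.lower() in low:
                counts[brand] += 1
    total = len(answers)
    return {b: round(100 * counts[b] / total, 1) for b in brands}

def citation_gaps(answers, us, competitors):
    """Indices of answers where a competitor appears but we are absent."""
    return [i for i, text in enumerate(answers)
            if us.lower() not in text.lower()
            and any(c.lower() in text.lower() for c in competitors)]

# Hypothetical sampled answers for three target prompts:
answers = [
    "Many users recommend AcmeCRM and PipeTool for small teams.",
    "PipeTool is the most cited option on Reddit.",
    "AcmeCRM offers the best free tier.",
]
print(share_of_voice(answers, ["AcmeCRM", "PipeTool"]))  # {'AcmeCRM': 66.7, 'PipeTool': 66.7}
print(citation_gaps(answers, "AcmeCRM", ["PipeTool"]))   # [1] -> a gap to fill
```

Substring matching is deliberately crude here; a production tracker would normalize brand variants and pull answers via each platform’s API on a schedule.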

For robots.txt: Allow Indexing for Bots

To ensure visibility within the OpenAI ecosystem (the predicted platform winner), brands must explicitly allow the GPTBot user-agent in their robots.txt file. This technical action ensures indexing and inclusion in Retrieval-Augmented Generation (RAG) searches:

User-agent: GPTBot
Allow: /

For organizations prioritizing intellectual property protection, a middle path exists: allow the RAG/search bot (for citation visibility) while blocking the training bot, since they use different user-agents. This ensures citation visibility while keeping proprietary data out of the core model’s future training sets.
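As of this writing, OpenAI documents separate user-agents for these purposes: OAI-SearchBot (search and citation retrieval), ChatGPT-User (user-initiated browsing), and GPTBot (training). A robots.txt sketch of the split described above might look like this:

```
# Allow citation/search crawling, opt out of model training
User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: GPTBot
Disallow: /
```

Check OpenAI’s current crawler documentation before deploying, since user-agent names and behavior can change.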

Direct Injection via Custom GPTs

The highest-control tactic for content injection is the creation of a Custom GPT, where the brand can directly feed proprietary, verified data into the model’s localized knowledge base. This tactic is necessary to ensure the LLM uses the brand’s verified context and terminology when generating answers about the company.  

VI. Platform-Specific Ranking Mechanics ‘How-to’

Success in AEO requires implementing tactics tailored to the unique architectural and ranking signals of each major LLM platform.

LLM Platform Ranking Priorities

| LLM Platform | Primary Ranking Signal | Key Technical Tactic | Content Focus |
| --- | --- | --- | --- |
| Google AI Overviews | E-E-A-T & Core SEO Signals | Structured Data (FAQ, How-To, Review Schema) | Direct Answers, Comparison Tables, Freshness |
| Google’s Gemini | Conversational Query Alignment | Consistency in Brand Terminology (@ Mention Feature in Workspace) | Comparison Assets, Hyper-Specific Q&A |
| Anthropic’s Claude | Clarity, Context, and Authority | API Integrations (Future Proofing) | Expert-Level, Conversational Tone, User Intent Alignment |
| ChatGPT (OpenAI) | Share of Voice across Citations | GPTBot Allowance (robots.txt), Custom GPT Content Injection | Mention Frequency, Community Engagement |

A. How to get ranked in Google’s AI Overviews (SGE) and Gemini 

The foundation for AIO visibility remains traditional SEO performance, as 74% of AIO citations originate from URLs ranking in the top 10 organic results. The required AIO technical checklist includes:

  • Structured Data: Employ robust schema markup (FAQ, How-To, Article, Review, Product) to optimize content for machine readability and parsing by Google’s AI systems.  
  • Content Structure: Content must be structured for rapid summarization, utilizing clear headings aligned with likely search queries, and employing bolding, bullet points, and short, scannable paragraphs. Sentences must be optimized to retain meaning even when extracted from context.  
  • Direct Answers and E-E-A-T: Content should use question-forward sections that deliver immediate, specific answers within the first few sentences. AIO relies on freshness signals and the Reviews System to validate quality and reputation.  
  • Maintain/Improve Traditional SEO Rankings – Ranking is a prerequisite. SGE heavily “prioritizes the top 10–20 results” when compiling answers. A Surfer analysis of millions of AI overviews confirmed that assistants lean on high-ranking content. If your page isn’t in the top 20 for a relevant query, the AI likely won’t see it. Surfer found their pages optimized for classic SEO were 2× more likely to reach Google top 10 within 30 days, which in turn increased their inclusion in AI answers. Bottom line: continue investing in SEO fundamentals – keyword-targeted content, on-page optimizations, and link acquisition – to stay in the AI feed pool.
  • Answer Up-Front and Clearly – Google’s AI needs to pull answer snippets quickly. Pages that surface the answer fast tend to win, due to “time to first token” considerations. Structure your pages in an “answer-first” way: lead with a concise summary or definition of the primary question in plain English (in one or two sentences) above the fold. Think of it like writing a featured snippet: the AI will grab the self-contained answer if it’s obvious. If the user (or a crawler) has to scroll or read extensively to find the key point, the AI might give up or cite another source. Also use FAQ-style headings (H2/H3 that pose likely questions) and provide direct answers right below. In short, optimize for extraction: content that can be copied/pasted as an answer block. A Surfer tip is to use short paragraphs, lists, or Q&A blocks that an AI can lift cleanly.
  • Use Schema and Structured Data – Implementing schema markup (FAQ schema, HowTo, Product, etc.) helps Google understand your content’s context and pull specific bits. For instance, adding FAQ schema for common questions can explicitly feed Google’s knowledge. Entities should be unambiguous: use Organization schema to tie your brand to known identifiers, Product schema for product specs, etc. Tom Lee of Visto emphasizes using “simple schema to help the system map your entity to the right concepts.” This clarity can increase the chance your content is chosen to answer an entity-specific query. Also ensure your page’s HTML is clean and semantic – no messy DOM or heavy scripts that might impede Google’s ability to retrieve content quickly.
  • Optimize “First-Party” Content for Speed and Clarity – Technical SEO is still key. Ensure fast load times and that you’re not blocking critical content behind logins, interstitials, or lazy-loading that an AI crawler can’t get past. Google’s AI can only summarize what it can crawl quickly. Using techniques like moving important text high in the HTML, using descriptive titles and headings, and avoiding clutter near the top can improve your content’s “crawlability” for AI. Google’s crawler may have a limited context window, meaning it won’t parse your entire 2,000-word article – just the top. In Visto’s GEO guidelines, they note AI models “skim and compress” and favor pages with “straightforward headings, unambiguous entities, and no padding”. So trim fluff, especially at the beginning of content.
  • Aim for Early Citation in the Overview – Even if your page is cited, being one of the first citations listed in the AI overview is ideal (users read those most). An eye-tracking study by Kevin Indig found that people pay most attention to the top part of the AI answer and often only skim the rest. Moreover, Google counts an impression for every cited URL when the overview loads, but users rarely click beyond the first few. Over 85% of users in one study clicked “Load more” to expand the AI answer, yet only ~1% clicked a citation. So being cited isn’t enough – aim to be the first or second source. How? One tactic is to cover as many subtopics as possible on your page so that the AI uses your page for multiple parts of its answer. If your content can answer several facets of the query (e.g., definition, pros/cons, examples), the AI might cite you multiple times, potentially bumping you to the top of the source list. Comprehensive “pillar” content can thus earn a greater share of the answer.
  • Monitor and Improve Click-Through if Possible – While traffic from SGE is lower (CTR down ~30% on those queries), the traffic you do get might be more qualified. Webflow observed a 6× higher conversion rate from LLM-driven traffic versus regular Google traffic. Users coming from an AI answer have essentially been pre-qualified by the conversation and trust in the assistant. This means it’s worth optimizing the content on your page to capitalize on that trust: ensure continuity between what the AI said and your landing page. For instance, if the AI overview cites you for “affordable pricing and ease-of-use,” make sure those points are immediately clear when the user clicks through. Track the queries where you’re cited and see if you can improve your content or metadata to entice clicks (though in SGE, meta descriptions may not be shown; your snippet in the answer is what matters).
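For the structured-data recommendations above, a minimal FAQPage JSON-LD sketch (the question and answer text are placeholders) looks like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is Answer Engine Optimization (AEO)?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "AEO is the practice of optimizing content so that AI answer engines can quote it directly and concisely."
    }
  }]
}
</script>
```

Note that the answer text itself follows the answer-first guidance: one self-contained sentence an engine can lift verbatim.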

Gemini prioritizes content that aligns with conversational queries and demonstrates authoritative E-E-A-T.

  • Content Consistency: Maintaining consistent terminology across the site and profiles is critical, assisting the Gemini model in cleanly resolving the brand and its products as authoritative entities.  
  • Structured Comparisons: Gemini favors easily extractable structures, particularly comparison tables or checklists, which it utilizes for generating definitive answers to comparative investigative queries.  
  • Internal Grounding: For enterprise users, the “@ Mention Feature” in Google Workspace allows reference to internal documents (Docs, Sheets), effectively guiding the model’s grounding data for private knowledge retrieval.  

B. How to get ranked in Anthropic’s Claude

Claude focuses on clarity and context, and often serves sophisticated user and developer niches.

  • Clarity and Authority: Claude rewards content that is highly relevant, clear, and contextually rich, aiming to provide a direct, single answer. Traffic generated via Claude citations is expected to have higher conversion rates because the user is deeply primed by the conversational answer.  
  • Content Tone: Content should maintain an expert-level, conversational tone that aligns precisely with specific user intent.  
  • API Integrations (The Future Bet): Given Anthropic’s inclination toward developer tools, optimizing content for future API integrations and agent workflow consumption is a key long-term factor for capitalizing on the emerging agent ecosystem.
  • Efficiency: The speed of Claude 4.5 can automate complex SEO tasks (research, writing, technical checks) significantly faster, enabling marketers to rapidly “test more keywords, publish more content, rank faster” than those using slower, legacy processes.  

Allow Anthropic’s Crawlers – Similar to GPTBot, Anthropic fetches web content with its documented ClaudeBot user-agent (its “Claude can now search the web” feature confirms live retrieval is involved). Ensure your robots.txt doesn’t block User-agent: ClaudeBot. If unsure, a safe approach is to allow all AI agents; some sites explicitly add rules for other AI crawlers or an Allow: / for all, given the rise of generative models.

Structured Data and Developer Docs – Claude was designed with a lot of documentation and Q&A in mind. It tends to excel at tasks like summarizing articles or answering from documentation. If you have developer-focused content or knowledge base articles, optimizing those for AI consumption is key. This includes having a clear structure (headings that outline question/answer), using schema (FAQ, QAPage schema), and providing examples. Claude’s training behavior also emphasizes high-quality, non-toxic content (due to their “Constitutional AI” tuning). So maintaining a professional, helpful tone in on-site content might influence whether Claude chooses your content over a more sensational but less reliable source.

Leverage Q&A Platforms – Claude is integrated in Poe (Quora’s AI app) and has likely been trained on a lot of Quora data. In fact, Quora threads often rank well in Claude’s written answers. For marketers, this means Quora is doubly valuable: it’s a Google SEO play and an AI training play. As with Reddit, participating in Quora questions in your niche (with genuine, in-depth answers that mention your brand when relevant) can seed Claude’s knowledge. One growth theory is that LLMs like Claude give extra weight to sources that appeared in their training data frequently. If your brand is consistently mentioned on Quora in answers about, say, “best CRM software”, Claude’s model might have a learned bias toward it. (This is a hypothesis, but supported by observations of how brand names pop up in AI outputs without current citations.) Thus, citation-driven brand building on Q&A forums and wikis can pay off across AI models.

Monitor Claude’s Outputs – Because Claude isn’t as public-web facing (many use it via API or apps), it’s harder to track where your brand appears. However, if you have access to Claude, ask it questions about your industry and company. Does it mention you? Does it cite your competitors more? Use such prompts to reverse-engineer what it “knows”. For example, if Claude is asked “Who are the top project management tools?” and it lists 5 competitors and not you, that’s a signal you need to get included in more comparisons or lists (maybe on Wikipedia, or via PR). If it answers with outdated or wrong info about your offering, plan to publish updates in sources it trusts. Anthropic has also introduced a “Citations” feature in their API to ground answers in provided documents, meaning third-party apps using Claude can feed it your content. Ensure your site is easily usable for that (maybe offer an API or clear docs so companies include your info as authoritative context).

Stay Harmless and Helpful – One unique angle: Claude’s training has a big focus on being helpful, honest, and harmless. It might shy away from content that appears biased or overly promotional. Thus, content that is educational and neutral in tone might be more likely to be favored by Claude. For instance, a well-written blog post titled “How to solve X problem – an unbiased guide” (even if your product is subtly suggested) could be more palatable to Claude’s filtering than a hard-sell page. While conjecture, it aligns with Anthropic’s ethos. So, review your high-level content for helpfulness. In an AI-driven world, the algorithm might literally be judging if your page sounds like a helpful expert or a marketing brochure. Favor the former.

Claude vs Gemini Platform Comparison

| Aspect | Google Gemini | Anthropic Claude |
| --- | --- | --- |
| Search Integration | Built on Google’s live index | Powered by Brave Search |
| User Base | 1B+ (AI Overviews), 140M (Gemini app) | 18.9M monthly |
| Context Window | Up to 1M+ tokens | Up to 200K tokens |
| Citation Style | Sources button, side panel | Inline citations with URLs |
| Verification | “Double-check” feature | Only cites verifiable sources |
| Crawl-to-Refer | 5.4:1 (Googlebot) | 38,065:1 (ClaudeBot) |

ClaudeBot Crawler Stats

Growth Trajectory (Cloudflare Data):

  • July 2024: 6.0% of AI crawler traffic
  • July 2025: 9.9% of AI crawler traffic (+3.9 points)
  • Among AI-only crawlers: rose from 15% to 23.3% (+8.3 points)

Crawl-to-Refer Ratios:

  • January 2025: 286,930:1
  • July 2025: 38,065:1
  • Change: -86.7% (dramatic improvement but still extreme)
  • For every 1 visitor referred back, ClaudeBot crawls 38,000+ pages
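The improvement figure quoted above can be verified directly with a quick sanity check:

```python
# Reproduce the crawl-to-refer change from the Cloudflare figures above.
jan_2025 = 286_930   # pages crawled per referred visitor, January 2025
jul_2025 = 38_065    # July 2025

change_pct = (jul_2025 - jan_2025) / jan_2025 * 100
print(f"{change_pct:.1f}%")  # -86.7%
```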

Crawling Purpose: 80% of ClaudeBot activity is for training (up from 72% a year ago)

C. How to get ranked in ChatGPT

ChatGPT relies heavily on aggregated external signals, making its visibility contingent on off-site performance metrics.

  • The Share of Voice Metric: ChatGPT determines brand prominence based on the frequency and authority of citations aggregated via RAG search.  
  • Off-Site Priority: Investment must focus on citation acquisition in high-trust User Generated Content (UGC) channels like Reddit and Quora, and rich media sources like YouTube.  
  • Technical Tactics (The Injection Layer):
    • Explicitly allow the GPTBot user-agent in robots.txt.  
    • The creation and feeding of internal, proprietary data into a Custom GPT is the highest-control method for directly injecting verified context into the model’s localized knowledge base.  
  • Help Center Optimization (The Long Tail Win): LLM users ask highly specific, technical, or use-case questions – content that often resides in the help center. Optimizing this resource (moving content to subdirectories, improving internal cross-linking, and generating articles for niche “zero use cases”) establishes a high-ROI, low-competition channel for capturing high-value users.

OpenAI’s ChatGPT has rapidly become a major “search” channel, reaching over a billion weekly queries as of late 2025. Unlike Google’s SGE, ChatGPT (with the browsing tool enabled) uses Bing’s web search and presents answers with cited sources for each portion of its response. Additionally, OpenAI has introduced custom GPTs (GPT-templates) that organizations or individuals can create, which opens new avenues for content discovery. Optimizing for ChatGPT involves ensuring your content is among the sources it trusts and leveraging new OpenAI features to your advantage:

Be Present in Bing Search Results – Since ChatGPT’s browsing relies on Bing, Bing SEO matters too. Much of your Google optimization will carry over (good content, keywords, etc.), but pay attention to Bing-specific factors like using Bing Webmaster Tools, submitting URLs, and incorporating semantic markup that Bing might favor. The overlap between ChatGPT’s citations and Google’s results is not 100%, but there is significant carryover. Ensure your site isn’t blocking Bing or OpenAI’s crawler – some sites accidentally did and found competitors showing up in ChatGPT in their stead. Check robots.txt for User-agent: Bingbot and also allow OpenAI’s new crawler (GPTBot) if you want your content used (OpenAI announced that disallowing GPTBot excludes content from training data and possibly from their browsing index). We suggest explicitly adding:

User-agent: GPTBot
Allow: /

… to welcome OpenAI’s crawler. This may help ensure ChatGPT can access your site when generating answers.

Influence the Trusted Source List – ChatGPT’s answer cites only a handful of sources. Research suggests it has certain biases in source selection. For example, Ahrefs’ data found ChatGPT leans more on Wikipedia (and presumably other high-authority info sites), whereas Google’s SGE leans heavily on Reddit. Knowing this, to rank in ChatGPT’s answers, aim to be present on those favored platforms: Wikipedia, if possible (e.g. get your brand or product a page or mention if you meet notability), high-authority blogs or news sites, etc. However, ChatGPT does also use Reddit and other UGC, just perhaps to a lesser extent than Google. A broad strategy is to ensure your brand or content appears across a diverse set of citation-friendly sources: Wikipedia, mainstream publications, tech blogs, Reddit threads, Quora answers, YouTube transcripts, and so on. Each of these can be a vector into ChatGPT’s answers.

Use Citation-Driven PR and Content Seeding – Citation-driven SEO is almost a new sub-discipline. Ethan Smith emphasizes that in an LLM answer, the “winner” is the one mentioned most across the citations. Thus, getting your name/content mentioned in multiple sources is key. This is more akin to PR than link building. For instance:

Get included in listicles and “Best X” roundups on third-party sites. If a prompt is “best project management software,” ChatGPT might cite a CNET or PCMag article listing tools. If you’re included there, you get into the answer. Early-stage companies can win this way without an SEO’ed site of their own. A single mention on a high-authority blog or a trending Reddit post can land you in ChatGPT tomorrow.

Seed relevant communities with genuine content. Reddit is particularly powerful. OpenAI has a partnership to use Reddit data for training, and ChatGPT’s live browsing also trusts Reddit (likely due to its rich, upvoted information). Sadowski calls “reddit community marketing…one of the best ways to influence AI models.” Share useful answers or insights on Reddit (and Quora, which he notes is “almost as important as Reddit”) that mention your brand or key product info. Important: Do this authentically – spam will be downranked or deleted by mods, and ChatGPT’s algorithms also favor highly upvoted/engaging threads. Ethan Smith describes a successful Reddit tactic: find an existing thread that is being cited by ChatGPT for a relevant query, then join the discussion with a verified account, disclose your affiliation, and add valuable info. For example, Webflow staff would comment on web design threads like “I work at Webflow, here’s a tip…” – those genuine contributions often stick and get picked up by the AI. You only need a handful of such credible comments, not thousands. This way, you effectively engineer citations in the places the AI is likely to look.

Leverage YouTube and LinkedIn – Aside from text-based sources, ChatGPT (and other AI) frequently cite YouTube videos (the transcript content) and LinkedIn posts/articles. Surfer’s analysis of 36 million AI overview citations found YouTube to be the #1 cited domain in AI results, and LinkedIn was also in the top five. Create video content for key queries (even short 3-5 minute explainers on YouTube). If the video ranks or is authoritative (good engagement), ChatGPT might cite “In a YouTube video by [Your Brand]…” especially for how-to or explainer queries. LinkedIn is another avenue: it’s a high-authority domain and ChatGPT can browse LinkedIn posts if public. Writing a LinkedIn article that answers a trending industry question (and mentioning your product) could get that post cited in an answer. At minimum, brand presence on these platforms boosts your coverage in AI answers.

Custom GPTs and Content Injection – OpenAI’s introduction of Custom GPTs (a feature for ChatGPT that lets users create specialized chatbots with custom knowledge or behaviors) presents an innovative way to “inject” your content into ChatGPT’s ecosystem. Michał Sadowski highlights that you can “create your own GPT… feed it with [your] data and voila – you pretty much inject your content directly into ChatGPT.” In practice, this means if you have a lot of content (e.g., product documentation, knowledge base, or articles), you can create a GPT that is trained on it. If such GPTs become shareable or part of OpenAI’s directory, users might query them. Even if not, having a custom GPT for your brand ensures that when you or prospects use it, the answers cite your sources and stay accurate. Also, speculation is that OpenAI could use data from custom GPTs to further inform ChatGPT’s main model or to identify trusted domains. Consider offering a GPT for your space (for example, “ACME Fitness Coach GPT” trained on your fitness content). This is early-stage and somewhat experimental, but it points to the future of “on-demand SEO”: instead of hoping the AI picks your site, you provide the AI (in a controlled way). At the very least, ensure your site’s content is in any official plugins or data integrations – for instance, if there are industry-specific ChatGPT plugins, getting listed there can channel queries to your content.

Accuracy and Brand Recall – OpenAI’s model may sometimes mention your brand without a citation if it “knows” it from training data. This is more likely if your brand is strongly associated with a topic. Build that association through press, Wikipedia, and high-authority mentions. Also, correct the record online to improve LLM accuracy. ChatGPT might confidently state incorrect facts about your product (based on old data or conflated info). To combat this, publish clarifying content on sources the AI pays attention to. For instance, if an incorrect pricing detail appears, publish a blog post or get a mention on a Q&A site clarifying it, so future crawls replace that misinformation. Tools like Ahrefs Brand Radar now track an accuracy score – the percentage of answers that get your facts right. If you find inaccuracies, treat it like reputation-management SEO: create content to set the story straight. With OpenAI starting to allow citations in its API responses (Anthropic too), citation engineering for correctness will become increasingly feasible – meaning if your site is the go-to reference, the AI will use it and get the facts right.

Google vs ChatGPT Search Volume (Verified Data)

373x Difference:

  • Google: 14 billion searches per day (5 trillion annually)
  • ChatGPT: 37.5 million search-like prompts per day (maximum estimate)
  • Based on ~1 billion total daily messages grouped into ≈125 million conversations, of which ~30% are estimated to have search intent (≈37.5 million)

Market Share:

  • Google: 93.57%
  • ChatGPT: 0.25%
  • ChatGPT search volume comparable to Pinterest (~20M/day)

Growth Context:

  • Google search volume grew 21.64% YoY (2023-2024)
  • ChatGPT traffic: 3.1 billion monthly visits (September 2024), up 112% YoY

Optimization

Core Factors based on ChatGPT algorithm:

  1. High Google Search Rankings (Most Important) – ChatGPT mirrors top Google results
  2. Authority Metrics – Domain Authority/Rating, social following, backlinks
  3. Presence in List/Comparison Articles – Featured in “Best [category]” articles
  4. Bing Optimization – Must be in Bing index (powers SearchGPT)

Content Optimization in GPT:

  • Conversational Q&A Format: Questions as H2/H3/H4 headings
  • Freshness Signals: Update content regularly, add “Last Updated” dates
  • Clear Hierarchy: Proper HTML heading structure, short paragraphs
  • Citations & Authority: Include data, statistics, real-world examples

VII. How Founders and Growth Hackers should tackle AIO

A. Hack: Creating Custom GPTs & Knowledge Bases

As mentioned, custom GPTs or any LLM-based chatbot that you can train on your data is a way of influencing outputs directly. A few angles:

  • Public-facing GPTs: If OpenAI (or others) allow sharing custom GPTs, create one that is genuinely useful in your domain, seeded with your content. For example, if you’re a cybersecurity company, a “CyberSec Q&A GPT” that cites your research could become popular. If the answers always cite your papers or tools, that spreads your reach.
  • On-site Chatbots: Many companies embed GPT-powered chatbots on their site (using their content as the knowledge base). This indirectly influences LLMs if the bots get usage and people talk about them (“According to X’s chatbot…”). Additionally, if OpenAI is ingesting content from user interactions (they say they might use conversations for training unless opted out), then many people asking your bot questions could feed the main models about your domain and preferred answers.
  • Structured Knowledge Bases: Provide an official knowledge base or developer documentation that’s easily crawlable. LLMs like to cite such pages (often they’ll quote an official docs page). If you see your competitors’ docs being cited and not yours, see why – maybe their docs have better SEO or clearer text. Improving your docs (with better introductions or summaries) might make them more cite-worthy. For example, OpenAI’s own documentation pages get cited (ChatGPT might say “According to OpenAI’s documentation…”). If you want that for your product, make your docs public, structured, and helpful.
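If you embed a GPT-powered bot on your site, the “structured knowledge base” point becomes concrete: the bot first retrieves the best-matching passage from your docs, then answers from it. A deliberately minimal retrieval sketch (word-overlap scoring instead of embeddings; the docs and function names are illustrative, not any vendor’s API) shows why clear, self-contained doc sections are more cite-worthy:

```python
# Minimal sketch: retrieve the best-matching knowledge-base passage
# for a user question via word overlap. Real bots would use embeddings;
# this only illustrates why clear, self-contained doc sections help.

def tokenize(text: str) -> set[str]:
    # Lowercase and strip trailing punctuation from each word.
    return {w.strip(".,?!:").lower() for w in text.split()}

def best_passage(question: str, passages: list[str]) -> str:
    q = tokenize(question)
    # Score each passage by how many question words it shares.
    return max(passages, key=lambda p: len(q & tokenize(p)))

docs = [
    "Pricing: the Pro plan costs $49 per month and includes API access.",
    "Installation: run the installer and restart your machine.",
    "Support: email support is available 24/7 for all plans.",
]

answer = best_passage("How much does the Pro plan cost per month?", docs)
# Picks the pricing passage, because it directly shares the query's terms.
```

The design point: passages that state a fact in the user’s own vocabulary (“the Pro plan costs $49 per month”) win the retrieval step – the same property that makes docs pages easy for an LLM to quote.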

B. Robots.txt and LLMs.txt – Control Crawling

We touched on robots.txt updates. There are two sides:

  • Allowing Crawling: As we said, explicitly allow GPTBot and others if you want inclusion. The risk is if you have content you don’t want used in training (some opt out due to copyright). But for marketing content, inclusion is usually beneficial. Sadowski’s slide clearly advocates allowing GPTBot.
  • Blocking: If there’s content that is unhelpful or outdated, you might block LLM crawlers from it to avoid confusing the AI. For example, if you have old subdomains with deprecated info, put a Disallow: / for GPTBot on those. This is somewhat speculative – it’s unclear how granularly OpenAI and others honor such rules – but the idea is to manage what they see.
  • llms.txt: This is a proposed new standard where site owners create an llms.txt file listing high-quality pages or summaries for LLMs to use. The idea is to guide them to your best answers – e.g., an insurance company might list “/faq.html” or “/glossary.html” as pages the LLM should prioritize. It’s still experimental, and early results are mixed (e.g., Redocly ran a test and found it “mostly smoke”). However, if adopted, it could become something like SEO metadata for LLMs.
    • In absence of llms.txt being widely used, having a well-organized site map and HTML sitemap helps ensure crawlers can find all pages, including Q&A pages that might not be heavily linked.
  • Preventing Misuse: If your content was being inappropriately summarized or used by LLMs (say your paywalled content gets leaked via AI), there might be times you want to restrict them. For instance, Stack Overflow initially blocked GPTBot to protest AI usage of its data. As a marketer, though, you usually want inclusion.
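The two sides above can be made concrete. A minimal robots.txt sketch (the paths are illustrative, not from the source) that welcomes GPTBot on marketing content while keeping it off deprecated areas:

```
# robots.txt – allow OpenAI's crawler site-wide except legacy content
User-agent: GPTBot
Allow: /
Disallow: /legacy-docs/

# other crawlers unaffected
User-agent: *
Allow: /
```

And a minimal llms.txt sketch, following the proposed markdown format (the site, file names, and descriptions are hypothetical):

```
# ACME Insurance

> Plain-language answers to common insurance questions.

## Key pages

- [FAQ](https://example.com/faq.html): Direct answers to the most common policy questions
- [Glossary](https://example.com/glossary.html): Definitions of insurance terms
```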

C. Citation Engineering & Prompting Tactics

“Citation engineering” refers to actively shaping how and what the AI cites. Aside from structuring content for easy quoting, there are some interesting tactics:

  • Exact-Text Answers: If you phrase a line in your content that exactly answers a common question, the LLM is more likely to pick it up. E.g., a page that literally says “The top 3 tools for X are: ToolA, ToolB, ToolC” in a succinct way could get directly quoted, versus a meandering paragraph. Identify common questions and ensure you have a sentence or bullet that answers each directly. This is analogous to featured snippet targeting in old SEO.
  • Page Titles and Metadata: While the AI might not show meta descriptions, those still influence search. Also, the titles of pages could influence whether a source is chosen. An AI, when finding multiple sources, might favor the one whose title closely matches the question (since that suggests relevance). So, continue to craft clear, specific titles for your content – even framing them as questions or statements that match likely user prompts.
  • Refresh Content & Dates: AI systems (especially Google’s SGE) may factor freshness. Kevin Indig noted impressions and positions could be funky with AI, but also that 85%+ of users click “view full answer” which might cause more citations to load – in Google’s case those often include a variety of sources and possibly recent ones. Ensuring your content is up-to-date (show a recent date) could both improve your regular SEO ranking and your attractiveness to AI as a current source. Bing’s and ChatGPT’s browsing might similarly be biased to newer content for timely queries.
  • Positive/Neutral Sentiment: While not confirmed, one could hypothesize that AIs might avoid citing extremely negative or biased pieces (depending on the query). If your brand is mentioned in a controversial context, the AI might skip it in favor of a more neutral source. So part of citation engineering is also reputation management – making sure the narrative around your brand online is positive, or at least factual, so that whenever the AI comes across content about you, it’s safe to include. For instance, if there’s a high-ranking article “Why [YourBrand] is a scam”, the AI might mention it in a query about your brand (bad news). Getting more positive content out there is worthwhile – not just to bury negatives in search rankings, but to dilute them in the training data.
  • Direct Q&A Content: Consider creating a Q&A hub or using a format like <h2>Question?</h2><p>Answer…</p> on your site. Direct question-answer pairs might be more likely to be used by an LLM (especially if it chunked content into Q&A pairs). This mirrors how StackExchange data is so useful to LLMs – it’s literally questions with best-voted answers. If your site can provide similar structure (like a knowledge base with common questions), you might see those answers being regurgitated by AIs.
  • Monitor AI Answers Regularly: Because AI outputs can change as models update or as they crawl new content, set a schedule (e.g., monthly) to check important queries on ChatGPT, Claude, Google SGE, etc. Note who’s getting cited. If a competitor shows up frequently, analyze their content or where they’re mentioned. Perhaps they released a whitepaper everyone’s citing. That’s competitive intel – you might need a similar asset.
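The “Direct Q&A Content” pattern above is easy to sketch in HTML – one question per heading, with the direct answer in the first sentence below it (the questions and dates here are illustrative):

```html
<!-- Q&A hub pattern: question as heading, answer-first paragraph below -->
<article>
  <h2>How long does onboarding take?</h2>
  <p>Onboarding takes about 15 minutes: connect your account, import your data, and invite your team.</p>

  <h2>Does the free plan include API access?</h2>
  <p>No – API access starts on the Pro plan. The free plan covers the dashboard and manual exports.</p>

  <!-- freshness signal: a visible, machine-readable date -->
  <p>Last updated: <time datetime="2025-07-01">July 1, 2025</time></p>
</article>
```

Each answer opens with an exact-text, quotable sentence – the same property that makes StackExchange-style content so extractable.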

D. Track Share of Voice and Citation Frequency

The transition from a click-based to a citation-based economy mandates performance metrics that reflect generative visibility:

  • Share of Voice (SoV): track the percentage of LLM answers in which your brand is featured relative to competitors, across all target platforms.
  • Conversion Uplift: directly track conversion rates for LLM-referred traffic to capture the high-value acceleration effect (e.g., the observed 6x uplift). Because last-touch referral data is unreliable, post-conversion surveys or advanced attribution models are required.
  • Citation Velocity and Quality: monitor the rate and authority of your brand mentions – a modern proxy for link-velocity tracking – covering PR and community mentions, e.g., on Reddit.
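These metrics can be computed from a simple log of AI answers gathered via manual checks or a tracking tool. A minimal sketch of the bookkeeping (the data shape and brand names are assumptions, not any specific product’s format):

```python
# Sketch: compute Share of Voice and citation frequency from a list of
# logged AI answers. Each record notes which brands were mentioned and
# which domains were cited. The data model is illustrative.

answers = [
    {"brands": ["acme", "rivalco"], "citations": ["acme.com", "reddit.com"]},
    {"brands": ["rivalco"],         "citations": ["rivalco.com"]},
    {"brands": ["acme"],            "citations": ["acme.com", "youtube.com"]},
    {"brands": [],                  "citations": ["wikipedia.org"]},
]

def share_of_voice(answers, brand):
    """% of brand-mentioning answers that mention `brand`."""
    with_brand = [a for a in answers if a["brands"]]
    hits = sum(1 for a in with_brand if brand in a["brands"])
    return 100 * hits / len(with_brand) if with_brand else 0.0

def citation_share(answers, domain):
    """% of all cited sources that belong to `domain`."""
    all_cites = [c for a in answers for c in a["citations"]]
    return 100 * all_cites.count(domain) / len(all_cites) if all_cites else 0.0

sov = share_of_voice(answers, "acme")     # 2 of the 3 brand-mentioning answers
cs = citation_share(answers, "acme.com")  # 2 of the 6 total citations
```

Rerun the same log monthly and the deltas give you citation velocity for free.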

E. When to Bet on AEO

Your business model’s alignment with LLM usage by your users is the #1 factor in determining whether to pursue AIO / AEO.

AIO for Various Business Models

| Business Model | Observed Traffic Impact | Observed Revenue Impact | Strategic AEO Focus |
| --- | --- | --- | --- |
| Publishers/Affiliates | Significant Decline | Significant Decline/Struggle | Re-invest in Product-Led Experiences, Diversify (AI-Assisted Content) |
| B2B SaaS | Decline/Flat | Flat to Up (6x Conversion) | Mid-Funnel, Feature/Use Case Q&A, Citation SoV, Technical Injection |
| E-commerce | Changing/Flat | Low Impact Yet | Shoppable Cards, Rich Snippets, Product Schema, Local SEO (if applicable) |
| Early Stage Startups | Low | Low, but rapidly growing | Citation Acquisition (UGC, Video), Long-Tail Q&A, Rapid Experimentation |

For B2B SaaS, the high conversion rate validates aggressive investment in AEO, focusing on technical integration and citation acquisition to capture the high-intent user compressed by the LLM. For Early-Stage Startups, AEO can be prioritized alongside traditional SEO. The irrelevance of low domain authority in the Citation Economy allows for rapid wins through focused community engagement and long-tail content targeting obscure questions.  

Key Platforms for AIO / AEO / GEO

Brand Mentions vs Backlinks – a shift? No. More work.

Ahrefs Study (75,000 Brands Analyzed)

Brand mentions correlate ~3X more strongly with AI visibility than backlinks:

  • Brand Web Mentions: 0.664
  • Brand Anchors: 0.527
  • Brand Search Volume: 0.392
  • Domain Rating: 0.326
  • Backlinks: 0.218

The Winner-Takes-All Dynamic:

  • Brands in TOP 25% for web mentions: Average 169 AI Overview mentions
  • Brands in 50-75% quartile: Average 14 mentions (10X LESS)
  • Bottom 50%: 0-3 mentions (essentially INVISIBLE)

Reddit Optimization

REDDIT (40.1%)

  • Crowd-sourced validation via upvote/downvote system
  • 80% of Reddit users believe some questions are only answerable by humans
  • $60M Google partnership (API licensing)
  • 108M daily active users (Q1 2025)
  • Reddit’s presence in search results increased 191% in 2024

Timeline – Weeks 1-4:

  • Monitor target subreddits for 1-3 weeks
  • Understand community tone, rules, content formats
  • Build karma through helpful comments (not branded)

Timeline – Months 2-3:

  • Answer relevant questions with genuine expertise
  • Use transparent branded usernames
  • Find high-ranking Reddit threads around core keywords
  • Participate authentically – AUTHENTICITY IS EVERYTHING

Timeline – Month 4+:

  • Create branded threads based on common FAQs
  • Launch own branded subreddit
  • Scale content strategy

Best Practices:

  • Be transparent (disclose affiliation) ✅
  • Follow subreddit rules ✅
  • Prioritize value & do NOT pitch ✅
  • Don’t use corporate language ❌
  • Don’t hard sell ❌
  • Don’t use AI-generated content ❌

WIKIPEDIA (26.3%)

  • Encyclopedic authority with structured, factual content
  • Neutral point of view AI models trust
  • 20+ years of established authority
  • ChatGPT’s most cited source at 7.8%

Section – TBC

YouTube Optimization

YOUTUBE (23.5%)

  • 25.21% surge in citations since January 2025
  • 310% increase since August 2024
  • 200x citation advantage over TikTok, Vimeo, Twitch combined
  • Healthcare industry leads with 41.97% of YouTube citations

Priority Content Types:

  1. How-To/Instructional: +35.6% citation increase (22.4% of citations)
  2. Visual Demonstrations: +32.5% (technique demos, software tutorials)
  3. Product Reviews & Comparisons: 25% of cited videos
  4. Verification/Examples: +22.5% (unboxing, product comparisons)
  5. News/Current Events: +9.4%

Industry-Specific Opportunities:

  • Healthcare: 41.97% of YouTube citations
  • E-commerce: 30.87%
  • B2B Tech: 18.68%

PR Optimization

Why PR is Essential:

  • 48.6% of SEO experts name Digital PR as most effective link-building tactic
  • LLM referral traffic is worth 4.4x more than organic traffic
  • Brands in AI Overviews for transactional queries get 3.2x more clicks

1. Data-Driven PR Campaigns:

  • Create original research and studies
  • Pitch to journalists in your niche
  • Example: Search Intelligence’s Taylor Swift Super Bowl campaign earned 30 links from DR90+ publications (Mirror, NY Post, Fox News)

2. Journalist Outreach:

  • Respond to journalist requests on HARO, Qwoted, Featured
  • Monitor X and LinkedIn for callout requests
  • Don’t use AI – many writers dismiss AI-written pitches

3. Earned Media Strategy:

  • Target top news outlets (Reuters, NYT, WSJ, Forbes)
  • Industry publications and trade magazines
  • Local news and hyperlocal PR

ChatGPT Partners (Preferred Citations): AP, Financial Times, The Atlantic, News Corp, Time, The Guardian

LLM Crawl-to-Refer Ratios

January-July 2025 Data

| Platform | Jan 2025 | July 2025 | Change |
| --- | --- | --- | --- |
| Anthropic (Claude) | 286,930:1 | 38,065:1 | -86.7% (still MOST IMBALANCED) |
| OpenAI (ChatGPT) | 1,217:1 | 1,091:1 | -10.4% |
| Perplexity | 54.6:1 | 194.8:1 | +256.7% (worsening) |
| Microsoft (Bing) | 38.5:1 | 40.7:1 | +5.7% |
| Google | ~11.8:1 | 5.4:1 | Most balanced |

Geographic Patterns

North America Dominance (Q2 2025):

  • Nearly 90% of observed AI bot traffic hits North America
  • U.S.: 34.6% of global bot traffic overall

Implications

Publisher Referral Traffic Declining:

  • Google referrals to news sites: -9% (March vs January 2025)
  • April 2025: -15% vs January
  • Digital Content Next: -10% median YoY decline

Economic Model Disruption:

  • Publishers losing primary traffic source
  • Feeding AI systems without gaining traffic in return
  • Server costs increasing without corresponding revenue

Bot Activity Statistics:

  • 51% of all web traffic is automated (2024) – the first time in a decade that automated traffic exceeded human traffic
  • Bad bots: 37% of internet traffic (up from 32% in 2023)

Case Studies in AIO / AEO / GEO for 2026

Surfer SEO’s “AI SEO” Playbook

Surfer SEO (a content optimization platform) treated the AI search shift as “a category moment” and developed a playbook to dominate AI answers. Some key strategies they share:

  • Measure Assistant Visibility: Surfer built an internal AI Tracker to monitor how often their brand appears in AI results across 100+ prompts (covering problem queries, “best tools” lists, comparisons, etc.). This established a baseline Prompt Coverage (% of prompts where Surfer appears) and Citation Share (% of total citations that are Surfer’s).
    • Action item: Make a list of important questions in your niche and manually check ChatGPT, Google SGE, etc., for your presence. This will highlight where you’re missing.
  • Earn Missing Mentions (Outreach & Deals): When Surfer saw prompts where they were absent (e.g. a “Best SEO tools” roundup not including them), they proactively reached out to the content owners. Sometimes an email offering updated info or just asking for inclusion did the job. Other times they struck an affiliate or sponsorship deal to get listed. “Sometimes one email is enough,” they note. If not, consider providing an incentive (free access, commission, etc.) to be added.
    • Takeaway: Traditional PR meets SEO – ensure your product is listed in every major buying guide and list in your industry, because those lists feed the AI. It may cost a bit (affiliate payout or sponsor fee), but “the return will compound for years” if it lands you as a cited source.
  • Replace Competitor Citations by Outranking Them: Surfer noticed AI answers citing a competitor’s blog for a certain “X vs Y” comparison. Their response: write a better, more comprehensive page on that comparison, optimize it with all pertinent entities (using their Content Editor tool), and push it to rank #1. Within weeks, the assistants switched to citing Surfer’s page instead of the competitor. The lesson: even in an AI world, the best content wins. If an answer is consistently citing a specific competitor’s article, analyze it and outdo it – especially on factual clarity.
    • They emphasize factual clarity and entity coverage are the ingredients assistants rely on. If you lack domain authority to rank on your own site, an alternative is to publish on a larger site (write a guest post or get a columnist opportunity on a high-authority publication), then that content can rank and be cited.
  • Publish Bottom-of-Funnel Content First: Surfer found that bottom-of-funnel (BOFU) queries are heavily used in AI assistants. Prompts like “best X software” often lead to follow-ups like “X vs Y” or “X alternatives” as the conversation continues. The assistant will actively go fetch those comparisons.
    • Ensure you have content for all major BOFU needs: “Best [category] tools,” “[Your Product] vs [Competitor]”, “[Your Product] alternatives,” “[Category] pricing guide,” etc. If you don’t, the AI will use someone else’s content (and narrative) to answer, and you’ll be invisible. Surfer’s team prioritized creating those pages and optimizing them (often programmatically or via Surfer AI for scalability). Essentially, don’t let others shape the purchase decision in AI answers – publish that content yourself.
  • Show Up Where Assistants Look (YouTube, Reddit, LinkedIn): Echoing what we discussed, Surfer’s research on 36M AI overviews showed the same trio of external platforms dominating: “assistants pull repeatedly from YouTube videos, Reddit threads, LinkedIn articles”.
    • Surfer’s tactic: produce content on all three. They started publishing short YouTube videos for various SEO questions, engaging in Reddit discussions, and seeding thought leadership posts on LinkedIn. These are “no longer side channels… they directly shape AI answers.” The consistency of that message is striking across experts: to succeed in AI SEO, you must be active beyond your own blog.
  • Optimize for Extraction (Citability): Surfer coined the idea of making pages “citability-ready.” They used features like Auto-Optimize and an AI assistant (“Surfy”) to insert tables, lists, and Q&A blocks into their content – elements that are easy to snippet. The result: they gained visibility even when a page ranked only ~#15-20 on Google, not just top 10.
    • This suggests that if your content is highly structured and relevant, the AI might pull it in even if it’s slightly lower ranked, especially for niche facts or specific sub-questions. So adding a well-formatted table comparison or bullet list of pros/cons could be your ticket into an AI answer (over a higher-ranked page that’s a wall of text). Think of it like old-school “featured snippet optimization” on steroids.
  • Iterate Weekly – This is a Discipline: Surfer treats AI SEO like an ongoing campaign. Every week their team would: check the AI Tracker results, ship 2-3 fixes or new assets (outreach win, new page, video, etc.), and then measure the changes in prompt coverage and citation share. Consistency builds a moat, they argue, as competitors will also be trying these tactics.
    • They outlined a 30-day sprint plan: Week 1: baseline & export top sources (expect to see Wikipedia/YouTube/Reddit heavily); Week 2: close gaps (outreach to 10 listicles, draft two BOFU pages and a YouTube script); Week 3: publish & promote (launch pages with internal links, publish video, post on LinkedIn to amplify); Week 4: measure and double down on what moved the needle. This is an excellent agile framework marketers can emulate – it mixes content, digital PR, and measurement in rapid cycles.

Surfer’s playbook epitomizes treating AEO as an integrated strategy: part SEO (technical/content), part content marketing, part digital PR, and part product marketing (ensuring your value prop is reinforced across channels). They conclude with a rallying call: “This isn’t the end of search. It’s a rebirth. The brands that show up in AI answers will set the terms of the next decade.” In other words, act now.

Gyfted & Prawomat: SEO in the AI Era

We have two case studies of our own projects (Gyfted.me and Prawomat.com) that embraced new SEO tactics from 2023–2025, demonstrating what worked – and what didn’t – in this transitional period.

  • Gyfted.me (Talent Assessments Platform) – Strategy: They focused on solid technical SEO (a crawlable, high-quality site), an engaging product-led UX (free, ungated assessments that fulfill search intent), and research-intensive content – including programmatic SEO at scale (AI-assisted). They deliberately spent minimally (<$7k) on backlinks, proving you can grow with content and product strength. Results: Some initiatives flopped (“Meh…”) and some succeeded (“Yea!”) – presumably the open-access content and perhaps UGC data helped. One notable tactic: they repurposed job content via an AI pipeline – scraping job postings (structured data), then using an LLM to rewrite them into better content, which they published and indexed quickly. This hints at leveraging AI to populate long-tail content (programmatic pages) fast, a scalable pSEO approach. The key was 90% “deep research & creativity” in finding what content to create, and only 10% actual creation (with AI automating that part). The takeaway: AI can accelerate programmatic SEO – by turning databases or large datasets into human-readable pages that target niche queries at scale. But it must be guided by human research (to avoid spam).
  • Prawomat.com (Legal Aid “agent”) – Strategy: In Jan–Apr 2025, they launched Prawomat with a “vibe-coded” core app (an interactive legal help tool), supported by AI-generated content at scale (an LLM-powered engine creating free content pages). Essentially: A) AI-led programmatic SEO, B) a useful product, and C) minimal backlinking (well under $500). Learnings:
    1. User Experience and Engagement Matter! It’s not just about pumping out content. Prawomat’s team found that having an engaging product (the interactive legal tool) was crucial – “It’s not about content production.. UX + engaging product matter”. If people find value and stick around, that behavior signals and word-of-mouth help, which presumably also feeds into brand mentions and repeat usage.
    2. Most People Still Use Google – They observed that despite all the hype, “Most people do NOT use ChatGPT/Gemini/Claude… Most continue to use Google (habits are sticky!)”. This aligns with the notion that classic search isn’t disappearing overnight. So, don’t abandon SEO for a pure chatbot strategy. Rather, use chatbots as an additive channel and keep capitalizing on Google traffic, which Prawomat did by capturing searchers and then offering a GenAI-powered solution on their site.
    3. Apps Layer is Key – “Most value will be created at the apps layer,” they posit. Comparing it to how Excel (an app) captured value in the PC era, they suggest that having an application of AI (like their law assistant) is where you build defensible value, with search being the acquisition channel. In practice: they funneled Google Search users → Prawomat page → app usage → delivered value via generative AI content. This loop turned search visitors into active users, which is a deeper win than a one-time page visit.
    4. Frameworks Stay Same (but watch platform shifts) – the fundamental frameworks of growth (inbound marketing, product-led growth, etc.) remain even as the platform shifts. However, they also keep an eye on emerging concepts like “AI agents ecosystem protocols” (e.g. MCP, Agent-to-Agent communication) – acknowledging that down the line, how AI agents discover and talk to each other might become a new “SEO for agents” problem.

Brand24’s Citation Hacks for ChatGPT

Michał Sadowski (CEO of Brand24) has a talk “How to Rank Your Brand on ChatGPT,” where he shared some very actionable hacks, particularly focusing on citations and training data:

  • Mentions Beat Links – Sadowski stated the core difference succinctly: “SEO still revolves around link building, while GAIO is driven by mentions from high-authority domains.” In other words, classic PR is back. Getting an authoritative site to mention your brand/product (even without a hyperlink) can boost your presence in AI answers much more than a dozen low-quality backlinks would. This reframing is important for growth hackers: chasing DA for link juice might give way to chasing brand mentions for language models. He jokingly said PR is riding back in “on a white horse” – meaning we’re kind of going back to basics of getting written about in reputable outlets.
  • Target Training Data Sources
    • Reddit – Both OpenAI and Google have arrangements to use Reddit data. This confirms that having your brand and domain talked about on Reddit is extremely impactful. It influences both the LLM’s knowledge (training) and its real-time search results. Reddit community marketing, as we noted, is gold.
    • Quora – Nearly as important as Reddit for both training and citations. Many models were pre-trained on Quora Q&As. If your product is frequently recommended or discussed in Quora answers, an LLM might “think of it” when asked for solutions in that category.
    • YouTube – LLMs ingest a lot of data from YouTube (transcribed). In fact, he says new models use YouTube as a key training source. So, beyond just being cited for current info, appearing in YouTube content (e.g., influencers mentioning you, or your own channel’s videos being popular) plants your brand into the model’s mind.
    • Key sources will vary by model and vertical. For instance, a medical-focused model might rely more on PubMed or WebMD data, while a coding model leans on StackOverflow and GitHub. So, identify what your target AI likely was trained on. Often there’s public info or research on this. For broad consumer queries, though, the trio above (Reddit, Quora, YouTube) are universally important.
  • Surgical Mention Tactics – Once you know the sources, you can take “surgical” actions:
    • If Reddit is key, maybe organize an Ask Me Anything (AMA) in a relevant subreddit to generate discussion and media coverage.
    • If YouTube is key, collaborate with YouTubers in your niche or ensure your product is featured in top-list videos (many models would then have that in text form).
    • If Wikipedia is relevant (for many LLMs it is), work towards getting a Wikipedia page for your company (adhering to their rules) or ensure existing pages in your niche mention your brand (if legitimately notable).
  • Trigger “Deep Research” Mode – a genuinely clever hack: “trigger ChatGPT’s Deep Research or Agent mode to crawl your content.” Essentially, this means prompting ChatGPT in a way that forces it to fetch information from the web (using browsing) – specifically, from your site. For example, telling ChatGPT: “Browse the official docs on xyz.com and summarize how product X compares to Y.” By doing this, you not only expose the model to your up-to-date content, but you also create a log of your site being accessed. If enough people (or you, through different accounts or automations) do this, you raise the model’s familiarity with your content. It’s a bit gray-hat, but it’s like feeding the AI your pages on demand. This could improve how it perceives your brand in future answers (especially if its answers based on your content get positive user feedback). Moreover, Sadowski suggests creating a custom GPT for this purpose (as discussed earlier) – a GPT that has your content baked in. That way, any user who uses that GPT is essentially querying your data. While this is a nascent practice, it’s about proactively getting your data into the AI rather than waiting for the next training cycle.
  • Classic SEO Still Applies – Sadowski doesn’t dismiss traditional SEO at all: Brand24 significantly boosted visits by implementing these tactics – welcoming crawlers (the robots.txt tweak) and increasing mentions in key sources – with more AI-driven traffic as the result.
  • Nothing is off-limits – even weird content can become part of training data. The point: if it’s online and notable, an AI might ingest it. So, in theory, even memes or viral oddities referencing your brand could help – though that’s not exactly a strategy you can plan!
  • Treat LLMs like a new audience that learns via reading the internet. To “rank” in their answers, ensure you/your brand are mentioned frequently and favorably in the content they read. It’s a refreshingly straightforward insight that flips the SEO focus from link algorithms to language models’ training data and retrieval algorithms.

Eli Schwartz’s AI-SEO Framework: Product-Led SEO

Eli Schwartz approaches the SEO + AI shift from a product and strategy angle, rather than tactical. His framework (in interviews and his book Product-Led SEO) is about treating SEO as a product to build, with the rise of AI reinforcing that need:

  • Think of SEO as a Product“The product managers should be thinking about this SEO question because it’s a product question,” Eli says. What he means is that SEO success (especially now) isn’t a dark art or pure content game; it’s about understanding your user’s journey and building an experience to capture them. With AI answers taking over simple queries, the remaining traffic is for those who integrate deeper into the journey. He suggests PMs and product teams get involved in SEO strategy, since it may require building new sections, tools, or even micro-products to meet search demand.
  • Focus on the User’s Problem (Step 1: Be the User) – Eli’s Step 1 is the one “almost everyone misses”: be the user. Have deep customer empathy – identify who they are, what problem they’re trying to solve, and what they would search for. He gives an example that at SurveyMonkey, many employees initially lacked empathy because they never used the product as a real user would. So to succeed in SEO, whether AI or not, you must first answer: What is the user going to type in? If you can’t answer that, don’t do SEO at all. This blunt advice is even more crucial with AI, because AI answers handle broad questions – the traffic left for you is often from users with a very specific problem or intent. Understanding that intent is everything.
  • Decide What to Create (Step 2: The Asset) – Step 2 is figuring out what asset or content will best serve the user and align with your business. Eli emphasizes this may not always be a static blog article. It could be a programmatic page or an interactive tool. In his Tinder example, the team realized people search “online dating in [city]”, so the asset was not a blog post about dating tips, but a programmatic landing page per city giving date spot recommendations and gently introducing Tinder as the solution. Scale and format matter: they chose programmatic over editorial because covering every city manually is impractical. So, ask: What page or feature would truly answer the query? Maybe it’s a database-driven comparison table, a searchable index, a calculator, etc. In AI results, those rich pages often shine because they contain lots of pertinent info.
  • Build it like a Product (Step 3) – The third step: actually build and design the page(s) with full product treatment. Eli argues SEO content isn’t just an “SEO team” job; it should involve design, UX, engineering if needed, and certainly product managers. If you’re, say, a fintech startup, and you realize people search “[Your app] vs Competitor” – don’t just write a quick blog. Consider making a dynamic comparison tool or a polished page within your app’s domain that provides live comparisons. Treat these SEO pages as an extension of your product. This mindset ensures higher quality and usefulness, which not only pleases users (and thus AI algorithms in the long run) but also differentiates you from generic content.
  • Mid-Funnel Focus – Eli is optimistic that mid- and bottom-funnel searches will remain valuable despite AI. He points out that if you’re solving a real problem (like loneliness via a dating app), the AI can’t fulfill that – it can only guide – so the click will happen when the user is ready to act. Thus, focus your SEO (and product) efforts on those mid-funnel queries where the user knows what they want broadly and is deciding how to get it. Top-of-funnel informational queries may get answered entirely in AI chat (“What’s the population of X city?” – no click needed). But “Which solution is best for X need?” or “How do I actually do X?” often leads the user to eventually click a tool or service. Build content for these decision-making moments.
  • Don’t Chase Trends Blindly – Eli confesses he initially feared AI answers would be an “apocalypse” for SEO. But he realized the fundamentals still hold; what the shift does is force people to “pivot their thinking” and shed some old tactics. For example, stuffing pages with keywords and building random links (old habits) won’t cut it – especially if AI is summarizing. Instead of “SEO hacks,” think strategy and experience. He effectively calls out that many were complacent because those old tactics worked – now LLMs are forcing a reckoning: “How should I be driving business from search now?” And the answer is by genuinely serving the user and perhaps doing things that don’t scale (like building features, not just content).

Schwartz’s framework won’t give you a list of quick hacks, but it ensures you’re navigating in the right direction as you apply the tactics from others. Ensure any SEO/AEO project starts with the user’s perspective, uses your unique product strengths (not just content farming), and delivers real value. If you do that, you’ll naturally align with where AI search is headed: surfacing the most useful, authoritative solutions for a query. Product-led SEO is a moat that even algorithm changes (AI or otherwise) have a hard time eroding.

TL;DR Future Trends & Protocols

MODEL CONTEXT PROTOCOL (MCP)

What is MCP?:

  • Open-source standard released by Anthropic (November 2024)
  • Enables AI applications to connect with external data sources, tools, and systems
  • Analogy: Functions like a USB-C port for AI – standardized connection method

How MCP Works:

  • MCP Clients: Integrated within LLMs
  • MCP Servers: Expose data/functionality from external systems
  • Transport Layer: Carries two-way messages between clients and servers (JSON-RPC 2.0 format)
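The client/server/transport split above can be sketched in a few lines. This is an illustrative sketch of the JSON-RPC 2.0 framing MCP uses, not the official SDK: the `tools/call` method name follows MCP’s published conventions, but the tool name `search_docs` and its arguments are made up for the example.

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope, as MCP clients send them."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# A client asking an MCP server to invoke one of its exposed tools
# ("search_docs" is a hypothetical tool for this sketch):
request = make_request(
    1,
    "tools/call",
    {"name": "search_docs", "arguments": {"query": "pricing page"}},
)

# The server replies with a result keyed to the same id:
response = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "3 matches found"}]},
})

# Responses are correlated to requests by id:
assert json.loads(response)["id"] == json.loads(request)["id"]
```

The point is that the transport layer only moves these envelopes back and forth; what tools exist and what they return is entirely up to the MCP server.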

Industry Adoption (2025):

  • Anthropic: Creator (Claude integration)
  • OpenAI: Officially adopted March 2025 across ChatGPT desktop, Agents SDK
  • Google DeepMind: CEO confirmed MCP support in Gemini models (April 2025)
  • Microsoft: Integrated across Copilot Studio, Semantic Kernel, Azure OpenAI
  • Enterprise: Block, Apollo, Zed, Replit, Codeium, Sourcegraph

Benefits:

  • Single standard protocol vs. N×M data integration problem
  • Simplified workflow automation
  • Standardized security & governance
  • Developer efficiency with pre-built MCP servers
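The N×M point is easiest to see with concrete (hypothetical) numbers: 10 AI apps each integrating directly with 20 tools would need 200 bespoke connectors, while a shared protocol needs only one adapter per app plus one server per tool:

```python
# Back-of-envelope illustration of the N×M integration problem MCP addresses.
# The counts are hypothetical.
apps, tools = 10, 20

pairwise = apps * tools          # one bespoke connector per (app, tool) pair
shared_protocol = apps + tools   # one MCP client per app + one MCP server per tool

print(pairwise, shared_protocol)  # 200 30
```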

AGENT2AGENT (A2A) PROTOCOL

What is A2A?:

  • Open protocol launched by Google (2025)
  • Enables AI agents to communicate, collaborate, and coordinate actions
  • Launch Partnership: 50+ technology partners including Atlassian, Box, Salesforce, SAP, ServiceNow, plus major consulting firms

Core Design:

  • Built on open standards (HTTP, JSON-RPC, Server-Sent Events)
  • Secure by default with enterprise-grade authentication
  • Supports long-running tasks (multi-day operations)
  • Modality agnostic (text, images, audio, PDFs, JSON)

How A2A Works:

  1. Agent Discovery: Agents publish an “AgentCard” detailing their capabilities
  2. Capability Negotiation: Discover each other’s skills
  3. Task Lifecycle Management: Structured states (submitted, working, completed)
  4. Bidirectional Communication: Request/Response with Polling or Streaming
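The four steps above can be sketched as follows. This is an illustrative sketch, not a spec-complete implementation: the AgentCard fields and task states mirror A2A’s public descriptions, but the agent, endpoint, and skill shown here are hypothetical.

```python
# 1. Agent Discovery: an agent publishes an AgentCard describing itself.
# (Agent name, URL, and skill are hypothetical.)
agent_card = {
    "name": "repair-shop-agent",
    "description": "Diagnoses car problems from natural-language reports",
    "url": "https://example.com/a2a",
    "capabilities": {"streaming": True},
    "skills": [{"id": "diagnose", "name": "Diagnose vehicle noise"}],
}

# 3. Task Lifecycle Management: tasks move through structured states.
TASK_STATES = ["submitted", "working", "input-required", "completed", "failed"]

class Task:
    def __init__(self, task_id):
        self.id = task_id
        self.state = "submitted"   # every task starts here

    def advance(self, new_state):
        # Only allow states the protocol defines.
        if new_state not in TASK_STATES:
            raise ValueError(f"unknown state: {new_state}")
        self.state = new_state

task = Task("task-001")
task.advance("working")
task.advance("completed")
print(task.state)  # completed
```

Capability negotiation (step 2) and request/response vs. streaming transport (step 4) sit on top of this: a client reads the AgentCard to pick a skill, then tracks the task through these states via polling or Server-Sent Events.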

A2A vs MCP – Complementary Protocols:

  • MCP: Connects agents to tools and data (primitives with structured I/O)
  • A2A: Enables agent-to-agent collaboration (autonomous applications)

Example:

  • MCP: Protocol to connect with structured tools (e.g., “raise platform 2 meters”)
  • A2A: Protocol for end-users to work with agents conversationally (e.g., “my car makes a rattling noise”)

AI AGENT ECOSYSTEMS

Gartner (August 2025):

  • 40% of enterprise apps will integrate task-specific AI agents by end of 2026 (up from <5% now)
  • 25% decline in traditional search engine volume by 2026
  • Best-case scenario: Agentic AI could drive ~30% of enterprise application software revenue by 2035 (>$450 billion)

Other Forecasts:

  • Deloitte: 25% of enterprises using GenAI will deploy AI agents by 2025, growing to 50% by 2027
  • Capgemini: 82% of organizations plan to integrate AI agents by 2026
  • MarketsandMarkets: Global AI agent market projected to grow from $5.1B (2024) to $47.1B (2030), CAGR 44.8%

Evolution:

  • 2024-2025: Task-specific agents (customer support, data analysis)
  • 2025-2026: Multi-agent systems with specialized roles
  • 2026-2027: Agentic ecosystems with autonomous collaboration
  • 2027+: Agent-to-agent marketplaces and dynamic team formation

2026 Predictions

AlixPartners:

  • Google’s share: 57% (2024) → 55% global, 48% U.S. (2025)
  • Marketers will shift SEO strategies toward AI-generated summaries

Forrester (Predictions 2026):

  • Enterprises will delay 25% of AI spend into 2027 due to ROI concerns
  • Majority forced to build “agentlakes” (composable agent architectures)
  • 30% of large enterprises will mandate AI training

2027:

  • AI Overviews: 6-7% of Google search ad revenue
  • 50% of enterprises using GenAI will have deployed AI agents (Deloitte)
  • Established multi-agent marketplaces and ecosystems

2028-2030:

  • 15% of day-to-day work decisions made autonomously by agentic AI (Gartner)
  • 33% of enterprise software includes agentic AI
  • AI agent market reaches $47.1B (MarketsandMarkets)

Sources: Stanford AI Index 2025, Ahrefs research (1.9M citations, 75,000 brands analyzed), Semrush data (150,000 citations, 5,000 keywords), SparkToro/Datos Research (search volume comparisons), Cloudflare (crawler traffic data – billions of requests), Gartner, Pew Research Center