SEOs Diners Club #218: The AI Visibility Checklist: A Comprehensive GEO Roadmap Compiled from 4 Weeks of Research
Over the past four weeks we examined every piece of the AI visibility puzzle, from AEO science to Agentic Engine Optimization, from a 98,000-row data analysis to Google's "we raised the indexing bar" admission. This week we put the pieces together: AirOps, Indig, Osmani, Barnard, and Google Toronto data consolidated into a nine-category GEO checklist. Plus: ChatGPT ads shift to CPC, and OpenAI releases GPT-5.5.

Hey everyone!
Over the past four weeks we've been examining puzzle pieces one by one. On March 28 we covered the science of AEO: AI doesn't rank pages, it selects fragments. On April 4, Kevin Indig's 98,000-row data analysis declared "AI citation is a content architecture problem." On April 11, Sundar Pichai painted the vision of "search becoming an agent manager." On April 17, Addy Osmani wrote the technical roadmap for Agentic Engine Optimization.
This week we complete the puzzle. We combined all of those studies, the citation rules AirOps proved across 16,851 queries, the "we raised the indexing bar" message from Google Search Central Live Toronto, and the most current GEO checklists in the industry to build a comprehensive, nine-category AI visibility roadmap. The result: AI visibility, distilled into a checklist you can work through.
Grab your coffee. This week we're turning theory into practice.
The AI Visibility Checklist: A Comprehensive GEO Roadmap from 4 Weeks of Research
Why We Need a Checklist
AI search visibility is no longer a single optimization technique. It's a system where dozens of signals across nine distinct categories work together. We combined the research we examined over the past four weeks (Princeton/Carnegie Mellon AEO study, Indig's 98,000-row analysis, Osmani's AEO guide, Barnard's topical ownership model), this week's AirOps data, and the most current GEO checklists in the industry to build a comprehensive roadmap.
The Nine-Category GEO Checklist
We organized this checklist into nine categories. Let's examine each one in the light of recent weeks' data and this week's new findings:
1. Technical AI Readiness: Allowing AI bots in robots.txt, indexability, snippet permissions, submitting XML sitemap to all search engines, Core Web Vitals optimization, broken link and redirect chain cleanup, mobile compatibility, IndexNow protocol implementation, llms.txt file creation, and serving content in crawlable HTML independent of JavaScript. This is the first layer of Osmani's AEO framework from last week: discoverability. Cloudflare Agent Readiness analyzed 200,000 sites and found that 78% have robots.txt but only 4% specify AI preferences. Osmani's agentic-seo open-source audit tool can automatically crawl this category. Critical detail: LLM bots don't execute JavaScript. Your content must be in crawlable HTML.
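As a concrete starting point for the robots.txt item, a minimal file that explicitly allows the major AI crawlers might look like the sketch below. The user-agent tokens shown (GPTBot, OAI-SearchBot, ClaudeBot, PerplexityBot) are the publicly documented ones at the time of writing, but verify each vendor's current documentation before deploying, and the URLs are placeholders:

```text
# OpenAI: training crawler and search crawler
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

# Anthropic and Perplexity
User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Default rules for everything else
User-agent: *
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```

Remember that robots.txt controls crawling, not indexing; snippet and preview permissions are set separately via robots meta tags.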
2. Schema & Structured Data: Organization, Article, FAQ, Author/Person, Product, HowTo, Breadcrumb schemas and entity relationships. AirOps research delivers this week's most concrete evidence: pages with JSON-LD markup have a 38.5% citation rate versus 32% without. Even more striking: sites implementing advanced schema strategies (including entity relationships and comprehensive property coverage) receive 3.2x more AI citations on competitive topics. Ordered headings paired with rich schema correlate with a 2.8x higher citation rate.
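To make the JSON-LD point concrete, here is a minimal Article markup sketch with author and publisher entity relationships. All names, dates, and URLs are placeholders; extend it with whichever properties actually apply to your pages:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline aligned with the target query",
  "datePublished": "2026-04-25",
  "dateModified": "2026-04-25",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://www.example.com/authors/jane-doe"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com"
  }
}
```

Linking the `author` and `publisher` objects to stable URLs is one simple way to express the entity relationships the AirOps data rewards.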
3. Content Structure & Citation Readiness: Starting with direct answer summaries (BLUF principle), clear H1-H3 hierarchy, quotable statistics and data points, expert quotes and citations, FAQ sections, key takeaways sections, primary source citations, definition boxes, comparison tables, and content freshness signals. This category maps directly to the AirOps data: heading-query alignment creates a 41% vs. 30% citation rate difference. 500-2,000 words is optimal; 5,000+ words actually decreases the citation rate. Osmani's "first 500 tokens are the golden zone" rule materializes here. New data: 44.2% of all LLM citations come from the first 30% of the text. Quotable claims should be at the top of the page.
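The "44.2% of citations come from the first 30% of the text" finding suggests a simple self-audit: check what share of your quotable, number-bearing sentences sit in that zone. The heuristic below (digits as a proxy for statistics, whitespace word counts) is our own rough sketch, not a published methodology:

```python
import re

def golden_zone_share(text: str, zone: float = 0.30) -> float:
    """Rough heuristic: the fraction of 'quotable' sentences
    (those containing a digit, i.e. stats or data points) that
    begin within the first `zone` share of the text by word count."""
    words = text.split()
    cutoff = int(len(words) * zone)
    sentences = re.split(r"(?<=[.!?])\s+", text)
    quotable, in_zone, seen = 0, 0, 0
    for s in sentences:
        start = seen                # word offset where this sentence begins
        seen += len(s.split())
        if re.search(r"\d", s):
            quotable += 1
            if start < cutoff:
                in_zone += 1
    return in_zone / quotable if quotable else 0.0
```

A score near 1.0 means your data points already live in the golden zone; a low score suggests moving key claims toward the top of the page.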
4. Content Strategy & Topic Coverage: AI visibility audits for target keywords, content gap analysis against competitors, pillar/cluster content architecture, conversational language and long-tail query optimization, original research and first-party data production, multi-format content (text, video, infographic), and a content refresh schedule (at least quarterly). Indig's April 4 data comes into play here: AI citation is a content architecture issue, not a writing quality issue. New finding: AI citations "decay" in approximately 13 weeks without freshness signals. Regular updates are critical.
5. Brand Entity & E-E-A-T: Brand information consistency across all platforms, Google Business Profile creation/claiming, Wikipedia/Wikidata presence, comprehensive author pages, detailed About page, executive and team thought leadership content, key personnel LinkedIn profile optimization, and industry directory listings. Barnard's "topical ownership" model from last week applies here: coverage gets you into the candidate pool, architecture ensures you're understood, positioning puts you ahead of competitors with entity-level trust signals. New data: 85% of brand mentions come from third-party pages (not owned domains). "Your own site" isn't enough. Your presence across the ecosystem is what matters.
6. Off-Site Reputation & Authority: Brand mention monitoring on forums and review platforms, quality backlinks from AI-crawled sources, active participation in industry discussions, encouraging detailed customer reviews, publishing on high-authority external platforms, and professional responses to negative mentions. Domains with millions of brand mentions on Quora and Reddit have approximately 4x higher citation chances compared to those with minimal activity. Sites with profiles on Trustpilot, G2, and Capterra have a 3x advantage in being selected as a source by ChatGPT. 48% of AI citations come from community platforms like Reddit and YouTube.
7. AI Visibility Monitoring: AI citation tracking setup, competitor citation monitoring, ChatGPT/Perplexity/Gemini referral traffic analysis (in GA4), monthly controlled prompt testing, AI Overviews monitoring in Google Search Console, AI search performance dashboard creation, citation share of voice benchmarking, and analysis of competitors' cited content. Concrete tools exist for this category: HubSpot AEO ($50/month, CRM-backed prompt intelligence, five-dimensional scoring), Webflow AEO (closed-loop measurement and implementation system), and Cloudflare Agent Readiness (free agent readiness score).
At Stradiji, we use GeoGenie.ai for GEO monitoring: the platform regularly runs defined personas and prompts across ChatGPT, Gemini, Perplexity, Claude, and Google AI Overviews. It tracks Visibility Score (how often your brand appears in AI responses), Share of Voice (market share), Citation Rate (source links to your website), and Content Gap (content that AI engines can't find on your site). The Content Gap detection is especially valuable: if AI engines can't find relevant content on your site for a specific query, GeoGenie flags it and generates a content brief. This connects directly to Category 4 (Content Strategy) in our checklist. Multi-platform testing is mandatory: query your target keywords in ChatGPT, Perplexity, Google AI Overviews, and Bing Copilot, and document citation frequency, accuracy, and quality.
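For the GA4 referral-analysis item, a lightweight first step is bucketing referrer hostnames by AI platform. The hostname patterns below are assumptions based on commonly reported referrers; check them against your own GA4 referral report before relying on the mapping:

```python
# Referrer hostname -> AI platform. Patterns are assumptions;
# verify against your own analytics data.
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
    "claude.ai": "Claude",
}

def classify_referrer(hostname: str) -> str:
    """Map a referrer hostname to an AI platform label, or 'Other'."""
    host = hostname.lower().removeprefix("www.")
    for pattern, platform in AI_REFERRERS.items():
        if host == pattern or host.endswith("." + pattern):
            return platform
    return "Other"
```

Grouping sessions by this label gives you a month-over-month view of which AI engines actually send you traffic, which feeds the share-of-voice benchmarking above.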
8. Page Experience & UX: Serving critical content in crawlable HTML (not JavaScript-dependent), comprehensive internal linking structure, thin and duplicate content cleanup or consolidation, readability and scannability optimization, and descriptive alt text on all images.
9. AI Access & Preview Controls: Reviewing AI training preferences (opt-out decisions), meta tags for AI previews, ai.txt protocol implementation, AGENTS.md file (for codebases), and SKILL.md file (for APIs and services). This maps to the "access control" and "capability signaling" layers in Osmani's AEO framework. Important note: John Mueller stated that no AI crawler has been confirmed to pull information via llms.txt yet. However, the standard is maturing rapidly and early adoption may provide an advantage.
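For the AGENTS.md item, the file is simply a markdown briefing for coding agents placed at the repository root. The contents below are purely illustrative (the commands and conventions are placeholders, not a standard):

```markdown
# AGENTS.md — instructions for coding agents (illustrative sketch)

## Setup
- Install dependencies with `npm install`
- Start the dev server with `npm run dev`

## Conventions
- Run `npm test` before proposing any change
- Keep each commit scoped to a single concern
```

The same idea applies to SKILL.md for APIs and services: a plain-text capability summary an agent can read without executing anything.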
How the Last 4 Weeks of Data Support This List
Mar 28 – AI selects fragments, doesn't rank pages (Princeton/Carnegie Mellon) → Content Structure: direct answer summaries
Apr 4 – 98,000 citation rows: content architecture issue (Indig) → Content Strategy: pillar/cluster architecture
Apr 11 – "Search will become an agent manager" (Pichai) → Technical Readiness: AI bot access
Apr 17 – AEO: token efficiency, llms.txt, AGENTS.md (Osmani) → Technical Readiness + Access Controls
Apr 25 – Heading alignment 41% vs. 30% citation (AirOps) → Content Structure: clear H1-H3 hierarchy
Apr 25 – Schema 3.2x citation, community platforms 48% citation source → Schema + Off-Site Reputation
Apr 25 – "We raised the indexing bar" (Google Toronto) → Content Strategy: original research
Where to Start
The 9 most critical items in this checklist account for the largest share of total impact:
Allow AI bot access in robots.txt
Verify indexability of key pages
Check snippet permissions
Implement Organization schema
Start every page with a direct answer summary
Conduct an AI visibility audit for target keywords
Run a content gap analysis against competitors
Ensure brand information consistency across all platforms
Set up AI citation tracking
Start with these 9 items. Complete the rest in phases: high-impact items first, then quick wins.
Interactive GEO Scorecard
We turned this checklist into an interactive tool. Use the Stradiji GEO Scorecard to check off 70+ items one by one and see your GEO score update in real time. Identify your gaps, then send us your score. We'll build a GEO roadmap tailored to your site.
Mert's Note
For the past four weeks, we've examined a different face of AI visibility in every issue. This week, all the pieces come together. We merged the research, Google statements, and technical guides into a single action plan. At Stradiji, here's what we tell our clients: AI visibility is no longer a "we'll get to it someday" matter. Data-driven prioritization exists. Measurement tools exist. The time to start is now.
If you're looking at those 70+ items thinking "where do I even start?", take the GEO Scorecard and send us your results. We'll analyze your score against your industry, your current site architecture, and your competitive landscape, then deliver a custom GEO roadmap with clear priorities.
ChatGPT Ads Shift to CPC: AI Search Engines Become Performance Platforms
OpenAI has started testing cost-per-click (CPC) ads in ChatGPT. The shift from the CPM (cost per impression) model that launched in February to CPC marks a critical inflection point in AI search advertising. (Digiday, The Next Web)
Why the Shift from CPM to CPC?
The CPM that launched at $60 had dropped to $25 in some cases; the impression-based model wasn't sustainable. CPC ties revenue to measurable outcomes.
The Numbers
Cost per click runs $3-5. Minimum spend was reduced from $250,000 to $50,000. On April 15, the self-serve ad manager opened to global advertisers. OpenAI projects 2026 ad revenue at $2.5 billion, 2027 at $11 billion, and 2030 at $100 billion.
Stradiji Note
This development validates the importance of GEO strategy once again. ChatGPT is no longer just an "AI search engine"; it's becoming a performance marketing platform akin to Google Ads. The balance between organic AI visibility (GEO) and paid AI advertising is beginning to take shape, just like the balance between SEO and Google Ads. Here's what we tell our clients: build your organic AI visibility now, because the paid side is growing fast and CPCs will rise. Not sure where to start? Take the GEO Scorecard and send us your results.
AirOps Research: Heading Alignment Is the Strongest Factor in AI Citations
AirOps' comprehensive research analyzing 16,851 queries and 50,553 responses revealed what actually works in ChatGPT citations with concrete data. (AirOps Report)
Key Findings
Heading alignment is the strongest on-page factor: Pages with the strongest heading match to the query receive citations at a 41% rate, while weak matches stay at 30%.
Focused content beats comprehensive guides: Pages that answer the core query with a narrower, more focused approach outperform comprehensive guides.
Ranking still matters: First-position pages receive citations at 58.4%, while tenth-position pages get just 14.2%.
Optimal length is 500-2,000 words: Pages over 5,000 words receive fewer citations than pages under 500 words.
Structured data makes a difference: Pages with JSON-LD markup have a 38.5% citation rate versus 32% without. Articles with 4-10 subheadings perform best.
Practical Takeaways
This data maps directly onto last week's AEO discussion: token efficiency, focused content, and heading-query alignment all point the same way. The "10,000-word comprehensive guide" approach no longer works for AI citations; a page that perfectly answers a single question is more valuable than one that superficially covers a broad topic.
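As a quick sanity check on heading-query alignment, you can score the token overlap between a heading and its target query. This Jaccard similarity is our own crude proxy; AirOps does not publish its exact matching method:

```python
def heading_alignment(heading: str, query: str) -> float:
    """Crude token-overlap (Jaccard) score between a heading and
    a target query. A rough proxy for heading-query alignment only."""
    h = set(heading.lower().split())
    q = set(query.lower().split())
    if not h or not q:
        return 0.0
    return len(h & q) / len(h | q)
```

Running this across your H1-H3s against the queries you want citations for is a cheap way to spot headings that share almost no vocabulary with their target query.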
Google Search Central Live Toronto: "The Content Quality Bar Has Risen"
Google Search Central Live was held for the first time in Canada, in Toronto, on April 21. The event featured Danny Sullivan, Martin Splitt, Daniel Waisberg, Annanya Raghavan, and Ryan Levering, with critical messages emerging. (JC Chouinard)
The Most Important Messages
"AI lowered the content creation barrier, so Google raised the indexing bar." This is the clearest admission from Google's own mouth: indexing standards have been tightened in the face of the AI-generated content flood.
"Crawled – currently not indexed" is rarely a technical issue, usually a quality signal. An important reminder for site owners seeing this status in Search Console: the issue isn't rendering, it's content quality.
"We should track when we rank, not what we rank for." Due to high ranking volatility, point-in-time ranking checks can be misleading.
Original data is the strongest correlation variable for success in recent updates. First-party research and original data continue to be the key to breaking free from the AI slop cycle.
Anthropic Claude Mythos Preview: The Model Breaking New Ground in Cybersecurity
Anthropic announced Claude Mythos Preview on April 7, but this week's analyses reveal the model's true capabilities. Mythos is a new model tier, larger and more capable than the Opus models. It achieves 93.9% on SWE-bench and 97.6% on USAMO. (Foreign Policy, UK AISI)
The model's most notable capability is in cybersecurity. It became the first AI model to complete "The Last Ones," a 32-step enterprise network attack simulation, end to end. In one case, it wrote a browser exploit chaining four vulnerabilities.
Anthropic is releasing the model as a gated research preview under the name "Project Glasswing," prioritizing defensive cybersecurity use cases. It is not publicly available.
Also at Anthropic This Week
Anthropic briefly removed Claude Code from the $20 Pro plan and then reversed course. The company described it as "an A/B test on 2% of new users." The core issue: subscription plans are priced well below the book value of tokens consumed. It was reversed within 24 hours.
Around the AI World
GPT-5.5 Released: OpenAI released GPT-5.5 on April 23. The first fully retrained base model since GPT-4.5. 1 million token context window, $5 per million input tokens, $30 per million output tokens. Completes the same tasks with fewer tokens compared to GPT-5.4, and delivers the highest performance at half the cost of competitors on the Artificial Analysis Coding Index. Available to Plus, Pro, Business, and Enterprise users. In the same week, ChatGPT Images 2.0 arrived: for the first time, reasoning capabilities are integrated into image generation, offering 2K resolution and multi-image consistency. (TechCrunch, CNBC)
MIT: The "AI Fatigue" Era: MIT Technology Review distinguishes between "AI resistance" (active protest by a minority) and "AI fatigue" (masses quietly disengaging). According to Stanford's 2026 AI Index data, the share of people who think AI products are useful rose to 59%, but the "apprehension" rate also climbed to 52%. The perception gap between experts and the general public keeps widening.
Google Windows App: Google released its desktop app for Windows on April 15. The Alt + Space shortcut searches across the web, local files, apps, and Google Drive. AI Mode is integrated, no Gemini but Google Lens and Voice Search are included. The first "AI Mode-first" desktop search tool.
Google Demand Gen Updates: Google added the ability for retailers to use first-party catalog and conversion data in Demand Gen campaigns, along with view-through conversion (VTC) optimization. Reaching high-intent shoppers across YouTube, Discover, and Gmail is now easier.
SEO Week 2026: The conference runs April 27-30 in New York, bringing together Michael King, Rand Fishkin, Lily Ray, Cindy Krum, and Aleyda Solis. AirOps CEO Alex Halliday's session on "what AI agents reward" is directly connected to this week's research section.
Perplexity Updates: Perplexity launched its Patent research agent, live flight status tracking, an expanded Sports hub with 10 new leagues, and Sora 2 Pro video generation for Max subscribers. The API is now a four-tier platform: Agent API, Search API, Embeddings API, and Sandbox API (coming soon).
What to Do This Week
1. Review the 9 critical items in the GEO checklist. Check your robots.txt for AI bots, validate your Organization schema, set up citation tracking.
2. Align your page titles with target queries. The AirOps data is clear: heading-query alignment is the strongest on-page factor for AI citations (41% vs. 30%).
3. Consider splitting your long, comprehensive guides into focused pages. The 500-2,000 word range is optimal; 5,000+ words decreases the citation rate.
4. Put ChatGPT CPC ads on your radar. The self-serve ad manager is live, minimum spend is down to $50,000. Inform your clients.
5. Evaluate "Crawled – currently not indexed" pages in Search Console as a quality signal, not a technical issue.
6. Check your brand presence on Quora, Reddit, Trustpilot, and industry forums. Community platforms are the source of 48% of AI citations.
7. Identify and refresh content older than 13 weeks. AI citations decay without freshness signals.
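For item 7, a sitemap's `<lastmod>` dates are an easy way to flag pages past the 13-week decay window. The sketch below assumes simple `YYYY-MM-DD` dates in a standard sitemap; adapt it if your sitemap uses full timestamps:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def stale_urls(sitemap_xml: str, now: datetime, weeks: int = 13) -> list[str]:
    """Return URLs whose <lastmod> is older than `weeks` weeks.
    Assumes <lastmod> begins with a YYYY-MM-DD date."""
    cutoff = now - timedelta(weeks=weeks)
    stale = []
    root = ET.fromstring(sitemap_xml)
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        if loc and lastmod:
            modified = datetime.strptime(lastmod[:10], "%Y-%m-%d").replace(
                tzinfo=timezone.utc
            )
            if modified < cutoff:
                stale.append(loc)
    return stale
```

Feed the output into your quarterly refresh schedule from Category 4 so the oldest cited pages get updated first.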
Closing
Over the past four weeks we examined the science, data, vision, and technical infrastructure of AI visibility. This week we brought it all together in a nine-category checklist. The formula is clear: technical readiness + structured data + heading alignment + focused content + brand presence + community signals + regular updates = becoming the selected source in AI search.
AI search engines aren't just serving information anymore; they're becoming ad platforms. Google is raising the indexing bar. Community platforms are feeding nearly half of all citations. The shared message is clear: low-quality content gets filtered out; focused, verifiable, and ecosystem-strong content gets rewarded.
See you next week with insights from SEO Week 2026. Take care!
Mert Erkal
Stradiji | SEO, GEO & Conversion Optimization
Support the newsletter: if you find this content useful, buy me a coffee.
About Mert Erkal: Founder of Stradiji. 15+ years in SEO and GEO consulting for corporate clients globally. Author of SEOs Diners Club (English) and Dijital Pazarlama Notları (Turkish).
Newsletter: seosdinersclub.beehiiv.com
Twitter: twitter.com/merterkal
LinkedIn: linkedin.com/in/merterkal
YouTube: youtube.com/@stradiji