
Claude Opus 4.7 & AI Search: What Actually Changed for GEO & AEO & AI SEO
Leanid Palhouski
Insights
—
Apr 16, 2026
TL;DR — Five Things That Changed Today
Claude Opus 4.7 shipped April 16, 2026. It is an agentic, code-first AI search engine — not a chatbot upgrade. For GEO and AEO practitioners, five shifts are urgent and actionable today.
1. Claude now writes code to filter search results before they enter its context window. Only quotable, self-contained, fact-dense fragments survive.
2. robots.txt errors are deleting brands from AI answers. 79 % of top news sites block at least one AI retrieval bot, and wildcard rules aimed only at training crawlers often catch Claude-SearchBot too.
3. Citation is now contractually mandatory in every product built on the Claude API — attribution is infrastructure, not courtesy.
4. Community platforms (Reddit, Quora) capture 52.5 % of AI citations — more than brand domains. Third-party presence is now the primary AEO surface.
5. The optimal cited passage is 134–167 words, self-contained, with explicit entity names, numbers, and dates. No pronouns pointing backwards.
What Opus 4.7 Is (and Is Not)
Anthropic shipped Claude Opus 4.7 on April 16, 2026 — a same-price upgrade to Opus 4.6 ($5 / M input tokens, $25 / M output tokens, API name claude-opus-4-7). The headline is not conversational quality. It is a long-horizon agent that fetches, filters, verifies, and cites the web on its own. Claude traffic has grown roughly 5× year-over-year; eight of the Fortune 10 are now Claude customers; and Anthropic models power Claude.ai, GitHub Copilot, Cursor, Perplexity workloads, Notion Agent, Devin, and Microsoft Foundry.
For brands and publishers, the consequence is structural: optimizing for a single "AI search engine" is obsolete. You are now optimizing for a fleet of reasoning agents that do their own retrieval, write their own filtering code, and cite only what survives that process.
What Shipped on April 16, 2026
Core Model Upgrades
| Feature | Opus 4.6 | Opus 4.7 | Delta |
|---|---|---|---|
| Vision resolution | 1.15 MP | 3.75 MP | +226 % |
| SWE-bench Pro | 53.4 % | 64.3 % | +10.9 pts |
| SWE-bench Verified | – | 87.6 % | New SOTA |
| GPQA Diamond | – | 94.2 % | – |
| MCP-Atlas (tool use) | – | 77.3 % | Leads field |
| MMMLU | – | 91.5 % | – |
| BigLaw Bench (Harvey) | – | 90.9 % | – |
| BrowseComp (agentic search) | – | 79.3 % | ⚠ Loses to GPT-5.4 |
New API-Level Capabilities
| Capability | What It Does | GEO / AEO Relevance |
|---|---|---|
| xhigh effort level | Slots between high and max reasoning depth | More rigorous citation filtering |
| Task budgets (beta) | Token allowance across an entire multi-step agent loop | Agents can run 20–50 sub-queries per user question |
| New tokenizer | Produces 1.0–1.35× as many tokens for the same input | Effective cost rises for text-heavy pages |
| web_search_20260209 | Writes and executes code to filter results before they enter context | Code-based pre-filtering — see below |
| Multi-session memory | Remembers notes across long agentic runs | Brand consistency across sessions compounds |
Benchmark Comparison — Opus 4.7 vs. Competitors (April 2026)
| Metric | Value | Leader |
|---|---|---|
| SWE-bench Pro (Opus 4.7) | 64.3 % | ▲ |
| SWE-bench Pro (GPT-5.4) | 58.1 % | |
| BrowseComp (GPT-5.4) | 89.3 % | ▲ |
| BrowseComp (Opus 4.7) | 79.3 % | |
| GPQA Diamond (Opus 4.7) | 94.2 % | ▲ |
| MCP-Atlas (Opus 4.7) | 77.3 % | ▲ |
| MCP-Atlas (GPT-5.4) | ~71 % | |
How Claude Now Reads the Web
Code-Based Pre-Filtering Is the Biggest GEO Shift
With web_search_20260209, Opus 4.7 can write Python to filter, rank, and summarize search results before any page content touches the context window. Combined with a 3× research-efficiency gain (top overall score 0.715 across six modules; General Finance jumped 0.767 → 0.813), the practical effect is stark: Claude reads less of your page but reasons harder about which fragments to keep.
Claude no longer skims your whole page. It extracts candidate chunks, writes a filter, runs it, and reasons only on survivors. Unstructured pages return zero chunks.
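To make the shift concrete, here is a minimal sketch of what a code-based pre-filter pass could look like. The scoring heuristics (word-count window, fact density, pronoun penalty) mirror the rules described in this article, but the implementation itself is illustrative, not Anthropic's actual filter.

```python
import re

def score_chunk(chunk: str) -> float:
    """Illustrative scoring: favor self-contained, fact-dense passages."""
    n_words = len(chunk.split())
    # Passages near the 134-167-word sweet spot score highest.
    length_score = 1.0 if 134 <= n_words <= 167 else max(0.0, 1 - abs(n_words - 150) / 150)
    # Numbers, percentages, and dates count toward "fact density".
    fact_count = len(re.findall(r"\d[\d,.]*%?", chunk))
    # Backward-pointing pronouns suggest the chunk is not self-contained.
    pronoun_count = len(re.findall(r"\b(?:it|this|these|they)\b", chunk, re.IGNORECASE))
    return length_score + 0.1 * fact_count - 0.2 * pronoun_count

def prefilter(chunks: list[str], keep: int = 3) -> list[str]:
    """Keep only the top-scoring chunks; everything else never reaches the model."""
    return sorted(chunks, key=score_chunk, reverse=True)[:keep]
```

Pages that never produce a chunk scoring above zero simply disappear from the agent's working set, which is why atomic, fact-dense blocks matter more than total page length.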
The Three-Crawler Architecture Every Publisher Must Know
| Crawler | robots.txt User-Agent | Purpose | Block Consequence |
|---|---|---|---|
| ClaudeBot | ClaudeBot | Training data collection | Excluded from future model training |
| Claude-User | Claude-User | Live user query page fetch | Brand invisible in real-time answers |
| Claude-SearchBot | Claude-SearchBot | Search index for Claude search feature | Reduced visibility & accuracy in user search results (Anthropic’s words) |
A BuzzStream analysis found 79 % of top news sites block ≥1 AI retrieval bot. Most block ClaudeBot with a wildcard that accidentally catches Claude-SearchBot — deleting themselves from Claude answers.
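The fix is to address each crawler by name rather than with a wildcard. Here is a sketch of a robots.txt that blocks training collection while keeping answer-time retrieval open; verify the exact user-agent strings against Anthropic's current crawler documentation before shipping.

```
# Block training data collection only
User-agent: ClaudeBot
Disallow: /

# Allow live answer-time page fetches
User-agent: Claude-User
Allow: /

# Allow the Claude search index crawler
User-agent: Claude-SearchBot
Allow: /
```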
Citation Is Now Contractually Required
The Claude API now explicitly requires: "when displaying API outputs directly to end users, citations must be included to the original source." Every citation block returns url, title, cited_text, encrypted_index, page_age. The page_age field means dynamic filtering can explicitly prefer recently updated pages — a direct ranking signal for AEO.
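For teams building on the API, that means surfacing the citation fields rather than discarding them. Below is a hedged sketch using the Anthropic Python SDK's content-block pattern; the model name and tool type are taken from this article, and the exact citation shape should be checked against the current API reference.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-7",  # API name cited in this article
    max_tokens=1024,
    tools=[{"type": "web_search_20260209", "name": "web_search"}],
    messages=[{"role": "user", "content": "What changed for publishers in Claude Opus 4.7?"}],
)

# Collect every citation so it can be displayed to the end user,
# as the API terms now require.
for block in response.content:
    for cite in getattr(block, "citations", None) or []:
        print(cite.url, cite.title, getattr(cite, "page_age", None))
        print(f'  "{cite.cited_text}"')
```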
What Changed for GEO (Generative Engine Optimization):
Authority Compounds Through Agents, Not Rankings
With task budgets and xhigh effort, Opus 4.7 makes it economically viable to run agents that issue 20–50 queries per user question, cross-check sources, and cite only the cleanest passage. Research from Princeton, Georgia Tech, and the Allen Institute shows optimized content scores 40+ on AI visibility metrics versus 19.3 for unoptimized — gains exceeding 100 % — primarily by adding authoritative citations, statistics, and improving fluency.
Chunk-Level Formatting Beats Page-Level SEO
OtterlyAI’s analysis of over 1 million citations across ChatGPT, Perplexity, and Google AI Overviews found the optimal cited passage is 134–167 words, self-contained, fact-dense, and directly answers a question. Brand mentions on third-party platforms now show a 3× stronger correlation with AI visibility than backlinks.
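One practical way to act on this is to audit existing pages heading by heading. The script below is a rough sketch using requests and BeautifulSoup; the 134–167-word window comes from the OtterlyAI finding above, and the DOM assumptions (headings and paragraphs as siblings) may need adjusting for your templates.

```python
# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def audit_answer_blocks(url: str) -> None:
    """Print the word count of the prose under each H2/H3 heading."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for heading in soup.find_all(["h2", "h3"]):
        words = 0
        for sibling in heading.find_next_siblings():
            if sibling.name in ("h2", "h3"):  # stop at the next section
                break
            words += len(sibling.get_text(" ", strip=True).split())
        status = "OK" if 134 <= words <= 167 else "review"
        print(f"[{status}] {heading.get_text(strip=True)}: {words} words")

audit_answer_blocks("https://example.com/your-page")  # placeholder URL
```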
Citation Source Distribution — AI Search Engines (2026): community platforms capture 52.5 % of citations versus 47.5 % for brand-owned domains.
llms.txt Moves From Speculative to Table-Stakes
Anthropic co-developed the llms.txt and llms-full.txt standard with Mintlify. Over 844,000 sites have implemented it as of late 2025. Anthropic, Cloudflare, and Stripe use it for their own docs. Google included an llms.txt file in its Agent-to-Agent (A2A) protocol. With Opus 4.7’s code-based filtering, an llms.txt pointing to concentrated, fact-dense pages becomes the rational target for Claude’s pre-filter pass.
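A minimal llms.txt follows the structure documented at llmstxt.org: an H1 with the site name, a blockquote summary, then linked sections. The entries below are placeholders, not a recommended site map.

```markdown
# Example Brand

> One-paragraph summary of what the company does, with the facts an agent should quote.

## Docs

- [Product overview](https://example.com/product.md): what the product does and who it is for
- [Pricing](https://example.com/pricing.md): current plans, prices, and limits

## Optional

- [Company history](https://example.com/about.md): background and founding details
```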
What Changed for AEO (Answer Engine Optimization):
Claude Now Refuses Plausible-but-Wrong Answers
Hex’s evaluation noted Opus 4.7 "correctly reports when data is missing instead of providing plausible-but-incorrect fallbacks." The practical meaning: if your page contradicts higher-authority sources, Opus 4.7 is more likely than prior Claude versions to exclude you entirely rather than average you in. Brand consistency across Wikipedia, Reddit, LinkedIn, earned media, and your own domain now functions as a hard filter, not a soft signal.
Content Freshness Is Now Machine-Readable
Opus 4.7’s web search returns a page_age field with every result. Dynamic filtering can explicitly prefer recent pages. Content with clear "Last updated" dates, explicit year markers in headings, and versioned facts passes this filter more reliably.
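Beyond a visible "Last updated" line, standard schema.org Article markup exposes the same signal in machine-readable form. Whether page_age is derived from this markup is not documented here, so treat it as a complementary measure; the values below are placeholders.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Claude Opus 4.7 & AI Search: What Actually Changed for GEO & AEO",
  "datePublished": "2026-04-16",
  "dateModified": "2026-04-16"
}
</script>
```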
The BrowseComp Gap: Claude vs. ChatGPT AEO Strategy
| AI Search Engine | BrowseComp Score | Retrieval Style | AEO Bet |
|---|---|---|---|
| ChatGPT (GPT-5.4) | 89.3 % ✓ LEADS | Aggressive multi-hop browsing | Deep research assets, long-form authoritative pages |
| Claude Opus 4.7 | 79.3 % | Code-filtered tighter retrieval | Clean atomic answers, 134–167 words, structured |
| Perplexity | Routes via Claude | Domain-emphasis citations | Strong domain authority + Claude rules apply |
| Gemini 3.1 Pro | Relies on own index | Index-integrated breadth | Standard structured data + schema markup |
The GEO/AEO Playbook for Claude Opus 4.7
| Action | Why It Matters for Opus 4.7 | Priority |
|---|---|---|
| Audit robots.txt by user-agent | Allow Claude-User and Claude-SearchBot separately from ClaudeBot. 79 % of top news sites block at least one AI retrieval bot, many accidentally self-excluding from Claude answers. | 🔴 URGENT |
| Publish atomic answer blocks (134–167 words) | Opus 4.7’s code filter extracts exactly this shape. Each block must have explicit entity names, numbers, and dates — no backward-pointing pronouns. | 🔴 HIGH |
| Ship llms.txt and llms-full.txt | Anthropic’s own toolchain treats the format as first-class. Cost is trivial; reward is the pre-filter pass pointing to your best pages. | 🟡 HIGH |
| Treat third-party presence as primary AEO surface | Brand mentions on Reddit, Quora, Wikipedia, LinkedIn, and YouTube correlate 3× more with AI visibility than backlinks. | 🟡 HIGH |
| Add "Last updated" dates and year markers to headings | The page_age field in Opus 4.7’s search tool makes freshness a machine-readable ranking signal. | 🟡 MEDIUM |
| Measure per-engine, not per-"AI" | Claude-visible and ChatGPT-visible content will diverge further given Opus 4.7’s BrowseComp gap. Track each engine with Profound, Peec, or Otterly. | 🟢 MEDIUM |
| Add branded visual assets (logo lockups, product screenshots) | 3× vision resolution means Opus 4.7 reads logos and screenshot text at pixel fidelity — branded visuals now contribute to entity resolution. | 🟢 LOWER |
Where Opus 4.7 Sits in the AI Search Landscape
Anthropic has chosen a distinct lane. Where OpenAI’s ChatGPT Search (GPT-5.4) and Perplexity emphasize breadth of browsing, and Google’s Gemini 3.1 Pro emphasizes integration with the index it already owns, Anthropic is building around long-running, tool-using agents that treat the web as a callable API.
That shows up in MCP-Atlas (Opus 4.7 leads at 77.3 %), in the Model Context Protocol (MCP) ecosystem Anthropic open-sourced, and in Claude Code’s reported $2.5 billion annualized run rate. Critically, Perplexity routes a substantial share of its traffic through Claude models — an Opus 4.7 behavioral change often cascades into Perplexity answers days later without any announcement.
| Engine | Citation Style | Retrieval Method | Claude Dependency |
|---|---|---|---|
| Claude (Opus 4.7) | Short quoted passages, heavily filtered | Code-executed pre-filter + agentic loop | Native |
| ChatGPT Search | Clickable links, breadth-first | Aggressive multi-hop browsing | None |
| Perplexity | Domain-emphasis citations | Multi-engine with Claude routing | High (routes traffic) |
| Google AI Mode / Gemini | Brand visibility emphasis | Owned index integration | None |
FAQ
What is Claude Opus 4.7?
Claude Opus 4.7 is Anthropic’s flagship model released April 16, 2026. It is a same-price upgrade to Opus 4.6 at $5 per million input tokens and $25 per million output tokens. It introduces agentic task budgets, a new web search tool with code-based result filtering, 3.75-megapixel vision (tripled from 1.15 MP), a new xhigh reasoning effort level, and a new tokenizer that produces 1.0–1.35× more tokens per input.
What is Generative Engine Optimization (GEO)?
GEO is the discipline of structuring web content so it is retrieved, cited, and quoted by AI-powered answer engines such as Claude, ChatGPT Search, Perplexity, and Google AI Overviews — rather than merely ranked in traditional keyword search. Research published at arxiv.org/abs/2416.09980 shows GEO-optimized content can exceed unoptimized content by 100 %+ on AI visibility metrics.
What is the biggest GEO change in Opus 4.7?
The biggest change is code-based pre-filtering via the web_search_20260209 tool. Claude can now write and execute Python to filter, rank, and summarize search results before any content enters the context window. Only self-contained, fact-dense fragments survive. Pages structured around long narrative prose are filtered out before the model ever reasons about them.
How do I keep my site visible in Claude search results?
Audit your robots.txt file and allow Claude-User and Claude-SearchBot as separate user-agents, even if you block ClaudeBot for training data purposes. Then publish atomic answer blocks of 134–167 words under descriptive H2/H3 headings, with explicit entity names, statistics, and dates in every passage. Add an llms.txt and llms-full.txt file pointing to your canonical, fact-dense pages.
Why are Reddit and Quora more important than my own domain for AEO?
OtterlyAI’s analysis of over one million citations across ChatGPT, Perplexity, and Google AI Overviews found that community platforms capture 52.5 % of all AI citations versus 47.5 % for brand domains. Brand mentions on third-party community platforms correlate 3× more strongly with AI visibility than backlinks. This means earning discussion and brand mentions on community platforms is now a primary AEO strategy, not a secondary one.
Does Claude Opus 4.7 beat GPT-5.4 on AI search?
On most benchmarks, yes — Opus 4.7 leads on SWE-bench Pro (64.3 % vs. ~58.1 %), GPQA Diamond (94.2 %), and MCP-Atlas tool use (77.3 %). However, GPT-5.4 leads on BrowseComp — the agentic browsing benchmark most directly relevant to complex multi-hop search queries — scoring 89.3 % versus Opus 4.7’s 79.3 %. This means ChatGPT Search will still outperform Claude at surfacing deep-linked answers for complex research queries.
What is llms.txt and do I need it?
llms.txt is a Markdown-formatted file placed at your domain root that gives AI agents a clean, structured entry point to your most authoritative content. Anthropic co-developed the standard and uses it for its own documentation. Over 844,000 sites have implemented it. With Opus 4.7’s code-based filtering, an llms.txt pointing to fact-dense pages becomes the rational first target for Claude’s pre-filter pass. Implementation cost is trivial. Learn more at llmstxt.org.
How does Wrodium help with GEO and AEO?
Wrodium builds AI SEO/GEO infrastructure for brands that need to be visible in AI-generated answers across ChatGPT, Claude, Perplexity, and Google AI Overviews. Our platform audits crawler permissions, structures content into AI-retrievable atomic blocks, and tracks per-engine citation share-of-voice. Learn more at wrodium.com or read our research paper at arxiv.org/abs/2509.10762.
Conclusion: The Agent Is the New SERP
Opus 4.7 is, on its face, a coding and agentic-workflow upgrade. Read closely, it is the clearest signal yet that AI search is bifurcating from human search. The user no longer reads ten blue links; an agent reads hundreds of pages, writes filtering code, cross-checks with memory, and returns three cited quotes. The winners over the next two quarters will not be the sites with the most content. They will be the sites that write like databases, cite like journals, and publish like APIs.
Sites that restructured around atomic answers, clean crawler permissions, and cross-platform brand consistency in 2025 will compound their advantage. Sites still running a 2023 SEO playbook are, by Anthropic’s own filter logic, being read and discarded before they ever reach the model.