The 20 Queries Inside a Mythos AEO Audit

— By Christopher Lynch

A look at exactly what gets tested when Mythos runs your audit. Category queries, direct queries, and competitor-intent queries — and why each one matters.

Every Mythos audit runs 20 buyer-intent queries live against ChatGPT, Claude, and Gemini. That is 60 total responses analyzed per audit — one per query per engine.

The 20 queries are not generic SEO keywords. They are calibrated to a specific buyer's search behavior: the kinds of prompts someone actually types into an AI answer engine when shortlisting vendors in your category.

Here is what the 20 break down into, and why.

Five buckets

Every audit pulls queries from five buckets, calibrated to your vertical and company type during intake:

Category queries (8 queries)

The largest bucket. Questions a buyer asks when entering your category without yet having a vendor in mind.

Examples:

"Best [category] tools in 2026"
"Top [category] vendors for [buyer type]"
"What tools do [your audience] use for [job to be done]"
"Alternatives to [dominant incumbent]"

Why it matters: These are the queries where first-mover AEO investment pays off. If the engine cites you here, you enter the shortlist before the buyer has even clicked to a site.
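The bracketed placeholders above behave like a template system: intake answers fill the slots to produce the live queries. As a rough illustration only (the template wording, field names, and function are hypothetical, not Mythos's actual implementation), the expansion could look like:

```python
# Hypothetical category-query templates; bracketed slots become format fields.
CATEGORY_TEMPLATES = [
    "Best {category} tools in 2026",
    "Top {category} vendors for {buyer_type}",
    "What tools do {audience} use for {job}",
    "Alternatives to {incumbent}",
]

def build_category_queries(intake: dict) -> list[str]:
    """Expand each template with the customer's intake answers."""
    return [t.format(**intake) for t in CATEGORY_TEMPLATES]

# Example intake (illustrative values only):
queries = build_category_queries({
    "category": "contract analytics",
    "buyer_type": "in-house legal teams",
    "audience": "general counsels",
    "job": "contract review",
    "incumbent": "BigLegalCo",
})
```

Each expanded string is then sent verbatim to the three engines, so the same intake produces a directly comparable query set month over month.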

Direct queries (3 queries)

Asking the engine specifically about your brand by name.

Examples:

"What is [your brand]?"
"What does [your brand] do?"
"How does [your brand] work?"

Why it matters: This is the cleanest test of your entity signal. If the engine hallucinates (wrong founder, wrong category, wrong features), you have an entity-signal gap. If it answers accurately, your LinkedIn + schema + public-byline work is paying off.

Competitor-intent queries (4 queries)

Questions a buyer asks when specifically considering a competitor.

Examples:

"Alternatives to [named competitor]"
"[Competitor] vs [their competitor]"
"Is [competitor] worth it?"
"Open-source alternatives to [competitor]"

Why it matters: These are the highest-intent queries in the set. If an engine surfaces you when a buyer is already shopping for a competitor, you are winning against their closest consideration set.

Use-case queries (3 queries)

Questions framed around the specific job to be done.

Examples:

"How do I [specific task your product solves]?"
"Best way to [outcome]"
"What software helps with [specific pain point]?"

Why it matters: Use-case queries are where authority content and how-to guides get retrieved. Brands that publish explainer content on their specific problem space often win these even when their category queries are weak.

Ecosystem queries (2 queries)

Questions about your industry or market segment broadly, without a product-purchase intent.

Examples:

"What is the current state of [industry]?"
"How is [technology trend] changing [industry]?"

Why it matters: Ecosystem queries are thought-leadership proxies. Being named in the context of broad industry analysis signals to the engine that your brand is a credible voice in the category, which in turn lifts your retrieval on commercial queries.

The calibration

The 20 queries are not identical for every customer. Before the audit runs, Mythos calibrates the set based on:

Vertical. Legal tech, B2B SaaS, DTC, and professional services each get distinct query templates.

Company stage. An early-stage startup's baseline is weaker than an established vendor's, so the audit compares against a stage-appropriate baseline.

Named competitors. The customer can submit up to five competitors during intake; the audit also surfaces competitors the engines name that the customer did not list.

The engines tested are always the same — ChatGPT (GPT-4o), Claude (Sonnet 4.6), Gemini (2.5 Flash) — so audits are directly comparable month over month for Mythos Monitor subscribers.

Why 20 and not 50

A larger query set would give more statistical confidence, but the returns diminish fast. Twenty queries across three engines produces 60 responses, which is enough to see:

Mention rate per engine with reasonable confidence
Share of voice vs. competitors
Hallucination patterns (they repeat across queries when they exist)
Category vs. direct vs. competitor-intent separation

Adding more queries inflates cost per audit without improving the playbook quality. Mythos is calibrated to return the maximum signal-per-dollar for a one-time $299 audit, not the maximum signal regardless of cost.
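Mention rate and share of voice both fall out of the 60 responses by simple counting. A minimal sketch, assuming each response has been reduced to an engine label and a list of brands it named (the field names and structure here are assumptions, not Mythos internals):

```python
from collections import Counter

def mention_rate(responses: list[dict], brand: str) -> dict[str, float]:
    """Fraction of each engine's responses that mention the brand."""
    hits, totals = Counter(), Counter()
    for r in responses:
        totals[r["engine"]] += 1
        if brand in r["brands_mentioned"]:
            hits[r["engine"]] += 1
    return {engine: hits[engine] / totals[engine] for engine in totals}

def share_of_voice(responses: list[dict], brand: str) -> float:
    """Brand's mentions as a share of all brand mentions across responses."""
    all_mentions = Counter(b for r in responses for b in r["brands_mentioned"])
    total = sum(all_mentions.values())
    return all_mentions[brand] / total if total else 0.0
```

With 20 responses per engine, a single extra mention moves an engine's rate by 5 percentage points, which is the granularity the audit trades off against cost.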

How this feeds the playbook

Every query result becomes a data point in the playbook.

Missing mentions across category queries → flagged as a content or structured-data gap.
Hallucinated identities on direct queries → flagged as an entity-signal gap.
Competitor takeovers on competitor-intent queries → flagged as a comparison-content or citation gap.

The playbook you receive is ranked by how many queries a fix would lift relative to how much implementation effort it requires.
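That ranking amounts to an impact-per-effort sort. A hypothetical sketch (fix names, field names, and the scoring ratio are all illustrative assumptions, not Mythos's actual scoring):

```python
def rank_playbook(fixes: list[dict]) -> list[dict]:
    """Sort fixes by queries lifted per hour of effort, highest first."""
    return sorted(
        fixes,
        key=lambda f: f["queries_lifted"] / f["effort_hours"],
        reverse=True,
    )

# Illustrative fixes with made-up impact and effort estimates:
fixes = [
    {"name": "Add Organization schema", "queries_lifted": 3, "effort_hours": 2},
    {"name": "Publish comparison page", "queries_lifted": 4, "effort_hours": 8},
    {"name": "Fix founder bio on LinkedIn", "queries_lifted": 2, "effort_hours": 1},
]
ranked = rank_playbook(fixes)
```

The design choice is the same one behind "why 20 and not 50": prioritize signal per unit of cost rather than raw totals, so a small cheap fix can outrank a large expensive one.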

Run the full $299 Mythos audit to see the 60 live responses, the playbook, and the verified site-forensic snapshot for your brand.

Questions? [email protected]