We Audited Ourselves. Mythos Scored 8/100.

— By Christopher Lynch

Mythos is a productized AI visibility audit. So when we wondered whether we practice what we preach, we ran the audit on ourselves. The score came back 8/100 with 0% share of voice. Here is what we found and what we are fixing first.


Mythos sells a $299 AI visibility audit. So a reasonable question from any prospect is: does Mythos itself show up in the engines it audits?

This week we ran the full Mythos audit against mythosreport.com. Same engines, same 20 buyer-intent queries, same scoring rubric a paying customer gets. No sandboxing, no shortcuts.

The headline score came back 8 out of 100 with 0% share of voice.

This post is the full teardown: what the audit found, what we are fixing first, and what it means for anyone running an AEO (Answer Engine Optimization) audit on their own brand.

What we actually measured

The Mythos audit runs 20 queries live against three AI answer engines — ChatGPT (GPT-4o), Claude (Sonnet 4.6), and Gemini (2.5 Flash). Every query runs against all three engines, and every response is parsed for brand mentions, prominence, and sentiment.
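For a mental model of that loop, here is a minimal sketch in Python. The ask_engine wrapper, the query list, and the brand list are illustrative placeholders rather than the Mythos implementation, and a real parser would also score prominence and sentiment.

```python
import re

ENGINES = ["chatgpt", "claude", "gemini"]          # the three answer engines audited
BRANDS = ["Mythos", "Profound", "Otterly AI",
          "Peec AI", "Athena HQ", "Bluefish AI"]   # brands to look for in responses

def ask_engine(engine: str, query: str) -> str:
    """Hypothetical wrapper around each engine's API; returns the answer text."""
    raise NotImplementedError

def count_mentions(queries: list[str]) -> dict[str, int]:
    """Run every query against every engine and count which brands get mentioned."""
    mentions = {brand: 0 for brand in BRANDS}
    for query in queries:
        for engine in ENGINES:
            answer = ask_engine(engine, query)
            for brand in BRANDS:
                # Whole-word, case-insensitive match. A production parser would also
                # record where in the answer the brand appears and in what tone.
                if re.search(rf"\b{re.escape(brand)}\b", answer, re.IGNORECASE):
                    mentions[brand] += 1
    return mentions
```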

For mythosreport.com, we tested queries in two buckets:

Category queries — buyer-intent questions about AI visibility auditing, AEO tools, and competitor discovery. Queries like "best AI visibility audit tools in 2026" and "what tools measure brand visibility in ChatGPT."

Direct queries — asking each engine about Mythos by name. "What do you know about Mythos the AI visibility audit tool?"

Each bucket told us something different.

Headline findings

Zero organic visibility. Across 15 category queries, Mythos was not mentioned once by any engine. Not as a top recommendation, not as a footnote. When buyers ask ChatGPT or Claude or Gemini for AI visibility audit tools, the engines surface Profound, Otterly AI, Peec AI, Athena HQ, and Bluefish AI. They do not surface Mythos.

Three different hallucinated identities. When we asked the engines about Mythos directly, each gave us a different hallucinated answer. One engine invented a founder name. Another described a product built on a dataset we do not use. A third classified Mythos as MLOps tooling — a completely different category.

Profound wins the category by a wide margin. Profound was mentioned 12 times across our category queries. Otterly AI showed up 9 times. Mythos showed up 0 times.

That last number is the one that matters. Not because 0 is humiliating (it is, a little) but because it tells us exactly what to fix. A brand with weak schema and thin entity signals is invisible to AI answer engines in a way that is very different from being invisible to Google search. Search ranks you somewhere. Answer engines just do not return you.
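Share of voice, as used in the headline number, can be read as a brand's fraction of all brand mentions across the category queries. That definition is our simplified reading of the rubric, not its exact formula, but the arithmetic below shows why counts of 12, 9, and 0 translate to 0% for Mythos.

```python
def share_of_voice(mentions: dict[str, int]) -> dict[str, float]:
    """Each brand's percentage of all brand mentions across the category queries."""
    total = sum(mentions.values())
    return {brand: (100 * count / total if total else 0.0)
            for brand, count in mentions.items()}

# Mention counts reported in this audit (other competitors omitted for brevity).
counts = {"Profound": 12, "Otterly AI": 9, "Mythos": 0}
print(share_of_voice(counts))  # Profound ~57%, Otterly AI ~43%, Mythos 0%
```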

Why the score is 8 and not 0

The Mythos scoring rubric includes technical site-forensic signals in addition to mention rate. We got partial credit for:

Clean JSON-LD structured data on the homepage — Organization, WebSite, Service, and FAQPage blocks are all present and valid.

Meta tags, Open Graph, Twitter Cards — full coverage, no missing fields.

robots.txt and sitemap.xml — present and correct. AI crawler allow-lists are in place (GPTBot, ClaudeBot, PerplexityBot, etc.); a minimal example follows this list.

Wikipedia and LinkedIn entity signals — Intuitive Context has a LinkedIn company page. Mythos does not yet have a Wikipedia entry.
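For reference, the AI crawler allow-list portion of a robots.txt can be as small as the sketch below; treat it as an illustration rather than a copy of the live file.

```
# Explicitly allow the major AI answer-engine crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://mythosreport.com/sitemap.xml
```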

Those signals keep the score from zero. They do not pull a brand out of AEO invisibility on their own.

The hallucinations, verbatim

We are including these because any customer running a Mythos audit on themselves will see similar patterns. Hallucinations are not a bug of the engines. They are what engines do when they have nothing concrete to retrieve — they generate a plausible-sounding answer from adjacent patterns.

One engine, asked about Mythos, responded with a confident description of a product positioned around a specific proprietary dataset. We do not have that dataset. We do not mention that dataset anywhere on our site. The engine invented it because "AI visibility audit company" pattern-matches to "data product company" in its training distribution.

Another engine gave us a clean, honest answer — "I do not have specific information about a product called Mythos." That is actually the best possible outcome when we have no entity presence yet. Honest unknowing beats confident hallucination every time.

A third engine classified Mythos as MLOps tooling and started describing features from Weights and Biases. Wrong category, wrong product, right vendor-class pattern.

These are the kinds of findings every Mythos audit surfaces. For a paying customer, they are the early evidence that their brand does not have a strong enough entity signal to survive retrieval.

What we are fixing first

The full audit produced a playbook of 7 prioritized actions. The top five (Lane A, where we own the work):

1. Ship crawler-visible content on the homepage. The mythosreport.com homepage was rendering a "Loading..." heading as the first H1 crawlers saw, because the React SPA had not hydrated. Fixed this week — the first H1 is now the real product headline, and server-side rendered content matches what visitors see.

2. Build /vs comparison pages. /vs/profound, /vs/otterly-ai, /vs/peec-ai — each one is structural (delivery model, output, review process) and designed to rank on "Mythos vs X" queries. Shipped this week.

3. Adopt AEO terminology consistently. The page copy now uses "Answer Engine Optimization (AEO)" as the primary category term, and meta descriptions mention AEO explicitly. Shipped this week.

4. Expand structured data. Added Product and SoftwareApplication JSON-LD on the homepage — schema types Google and AI engines both read to understand "this is a named product, not a service listing." Shipped this week. A sketch of what that markup can look like follows this list.

5. Publish authority content. Three flagship pieces (this post is one of them) plus two supporting briefs. Shipping this week and next.
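To make item 4 concrete, SoftwareApplication JSON-LD on a homepage looks roughly like the block below. The field values are illustrative and simplified, not a copy of the live markup.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Mythos",
  "applicationCategory": "BusinessApplication",
  "description": "Productized AI visibility (AEO) audit across ChatGPT, Claude, and Gemini.",
  "offers": {
    "@type": "Offer",
    "price": "299",
    "priceCurrency": "USD"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Intuitive Context Consulting",
    "url": "https://mythosreport.com"
  }
}
</script>
```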

Lane B (the work a customer owns externally):

G2, Capterra, TrustRadius, and ProductHunt listings. These are third-party citation pages that AI engines read as entity signals. We have a runbook drafted; these submissions happen over the next two weeks.

Wikipedia, LinkedIn, and entity-registry signals. Wikipedia will not accept a product entry at our current stage, but we can strengthen LinkedIn company-page attributes, add Mythos to Intuitive Context's about copy, and layer in schema-for-schema cross-linking.

The 30-day re-audit

An unavoidable part of AEO is that the engines will not immediately reflect your fixes. Retrieval is stochastic and caches are cold. We expect structural fixes to start showing up in engine responses in 14 to 60 days.

We will re-run the same 20 queries in 30 days and publish the delta.

If the score does not move by at least 10 points, we will publish the reason why, not pretend it moved. The transparency is the product.

What this means for you

If you are running a Mythos audit on your own brand, here are three things we learned by doing this to ourselves:

The score is a diagnostic, not a grade. The playbook is the product. We built Mythos because founder-reviewed fix roadmaps are more useful than dashboards that tell you a number without telling you what to do.

Hallucinations are signal. When an engine makes up a founder name or a dataset or a category, that is specific evidence of missing entity signals. Each hallucination maps to a fixable gap.

Fixes compound. Schema plus citations plus authority content plus /vs pages — each one alone is small, but together they reshape what the engine retrieves for you.

Read the full Mythos report for mythosreport.com.

Or run the free one-query proof on your own brand.

Mythos is a productized AI visibility audit from Intuitive Context Consulting. $299 one-time, typically same-day delivery, always within 72 hours. A 7-day unconditional refund applies to every audit.

Questions? [email protected]