
When Wikipedia Fails: Three “Known‑but‑Never‑Named” Case Studies (From a 17‑LLM Benchmark)


Most execs hear the same advice: “Get a Wikipedia page. Then AI will know you.”

That advice is incomplete.

In a benchmark of 1,000 entities tested across 17 chatbots/LLMs (with 5,070 total prompts and 86,190 total answers), we found a group of entities that were recognized by many models but never showed up by name in any answers. Some of them do have Wikipedia pages.
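The reported totals are internally consistent: 5,070 prompts answered by each of the 17 models gives 86,190 answers. A quick sanity check of the arithmetic:

```python
# Benchmark scale as reported in the study.
models = 17
prompts = 5_070
answers = 86_190

# Each prompt appears to have been posed to all 17 models.
assert prompts * models == answers  # 5,070 x 17 = 86,190
```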

That’s the key point:

Mentions are driven by three forces:

The executive takeaway

If you care about AI visibility (search, recommendations, buyer research, or “what comes up in the chat window”), this distinction matters.

What we measured (in plain terms)

We tracked two different behaviors:

1) Recognition (“Do you exist in the model’s head?”)

When we asked models about an entity directly, many could identify it.

2) Mention (“Will the model actually say your name?”)

When we asked practical questions where naming would be useful, many entities never appeared by name at all.

This is where “Wikipedia fails.” Not because Wikipedia is useless, but because recognition is not the same job as retrieval and recommendation.
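The two behaviors can be operationalized separately. A minimal sketch, with hypothetical helper names and a deliberately crude heuristic (a real benchmark would need fuzzier matching and human review):

```python
def was_recognized(answer: str) -> bool:
    """Did the model identify the entity when asked about it directly?

    Crude heuristic: the answer is not a refusal or "I don't know".
    """
    refusals = ("i don't know", "i'm not familiar", "no information")
    return not any(r in answer.lower() for r in refusals)


def was_mentioned(answer: str, entity: str) -> bool:
    """Did the model say the entity's name in a practical question's answer?"""
    return entity.lower() in answer.lower()


# An entity can be recognized yet never mentioned (answers are illustrative):
direct = "Janus Henderson Investors is a global asset manager."
practical = "Consider low-cost index funds from large, established providers."

assert was_recognized(direct)
assert not was_mentioned(practical, "Janus Henderson")
```

The point of splitting the two functions is exactly the article's point: they are different jobs, and a benchmark has to score them separately.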

Why Wikipedia sometimes doesn’t translate into mentions

Reason 1: Topic–category mismatch

The question pushes the model into explain mode, not name mode.
If a question asks:

…the model often answers with concepts, steps, and warnings. It does not give brand names unless the question clearly asks for them.

Reason 2: Default recommendation gravity

Even when naming is allowed, models tend to pick from a small set of “safe defaults”:

This crowding effect is real. Many legitimate, well‑documented entities get pushed out.

A note on Wikipedia itself: “Not all pages are equal signals”

A Wikipedia page can prove an entity exists. But pages differ a lot in:

In other words: a Wikipedia page can be a great identity card, but a weak “when to mention this” trigger.

Three case studies: recognized, Wikipedia’d, and still invisible in answers

These three entities were recognized by many models, yet had zero mentions across the questions asked about them.

Case Study 1: National Pork Board

What happened:

In that world, the model’s safest move is to talk about:

A trade board does not get named unless the question is explicitly about:

Why Wikipedia didn’t help:

What would likely change the outcome:
Questions that invite organizations by name, like:

Case Study 2: Janus Henderson Investors

What happened:

Even when a firm is real and documented, it can get crowded out by:

Why Wikipedia didn’t help:

What would likely change the outcome:
Questions that reduce the “default list” effect, like:

Case Study 3: PolicyBazaar

What happened:
Insurance questions often trigger process advice:

Models also tend to stay generic unless the user anchors the context (especially by country). PolicyBazaar is strongly tied to a specific market context; if the question set doesn’t lock that in, the model stays neutral.

Why Wikipedia didn’t help:

Wikipedia doesn’t supply the missing trigger. The trigger is usually something like:

Without that, the model can answer the question without naming anyone.

What would likely change the outcome:
More specific prompts, like:

What these cases prove (the strategy point)

A Wikipedia page can help you get into the model’s memory. But mentions are a retrieval and recommendation problem.

To be named, you need two things:

A C‑suite checklist: how to move from “known” to “named”

1) Decide the questions you must win

Write down the 10–20 real buyer questions where your name should come up.
If you can’t write those questions clearly, the model won’t reliably retrieve you.

2) Make “category triggers” unavoidable

Across your public footprint (and third‑party coverage), tie your name to:

Not slogans. Plain language.

3) Reduce name variance

Models are sensitive to string differences. Choose the canonical form of your name and use it consistently.
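One hedged illustration of what “reduce name variance” means in practice: if your name circulates as several variant strings, both retrieval and mention counting fragment across them. A sketch (the variant list here is hypothetical):

```python
# Hypothetical variant spellings that fragment an entity's footprint.
VARIANTS = {
    "policybazaar": "PolicyBazaar",
    "policy bazaar": "PolicyBazaar",
    "policybazaar.com": "PolicyBazaar",
}


def canonicalize(name: str) -> str:
    """Map a known variant spelling to the one canonical form."""
    return VARIANTS.get(name.strip().lower(), name.strip())


assert canonicalize("Policy Bazaar") == "PolicyBazaar"
assert canonicalize("policybazaar.com") == "PolicyBazaar"
```

The same consistency you would enforce in your own analytics is what you want enforced in your public footprint, so the model sees one string repeated, not five strings diluted.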

4) Earn presence in “default list” sources

Models learn from what is repeated in high‑trust places:

If you’re absent there, the model will often choose safer, more common names.

5) Test like a product, not like PR

Don’t ask “does the model know us?”
Ask “does the model name us in the exact buyer questions we care about?”
That’s the metric that drives demand.
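The “test like a product” metric can be as simple as a mention rate over your target question set. A sketch, with canned answers standing in for real model output (in practice you would feed this the responses from whatever chat API you test against):

```python
def mention_rate(answers: list[str], entity: str) -> float:
    """Fraction of buyer-question answers that name the entity."""
    if not answers:
        return 0.0
    hits = sum(entity.lower() in a.lower() for a in answers)
    return hits / len(answers)


# Canned answers stand in for real model output to three buyer questions.
answers = [
    "Compare aggregators such as PolicyBazaar before buying term insurance.",
    "Check the claim settlement ratio and read the policy wording carefully.",
    "Match cover to income and avoid unnecessary riders.",
]

assert mention_rate(answers, "PolicyBazaar") == 1 / 3
```

Tracked over time and per question, this is the number that tells you whether you are merely known or actually named.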

The new executive question

The question is no longer: “Do we have a Wikipedia page?”
It’s:

Because in the AI era, being recognized is table stakes. Being named is the win.
