Most execs hear the same advice: “Get a Wikipedia page. Then AI will know you.”
That advice is incomplete.
In a benchmark of 1,000 entities tested across 17 chatbots/LLMs (with 5,070 total prompts and 86,190 total answers), we found a group of entities that were recognized by many models but never showed up by name in any answers. Some of them do have Wikipedia pages.
That’s the key point:
- Wikipedia can help an LLM recognize you. It does not guarantee the LLM will mention you.
Mentions are driven by three forces:
- What the user asks,
- What “category trigger” the question creates, and
- The model’s “safe default” shortlist.
The executive takeaway
- “Known” is not the same as “Named.”
- In our dataset, 319 entities were “known” but never named.
- Put simply: about 1 in 3 entities that models recognized still didn’t get mentioned.
If you care about AI visibility (search, recommendations, buyer research, or “what comes up in the chat window”), this distinction matters.
What we measured (in plain terms)
We tracked two different behaviors:
1) Recognition (“Do you exist in the model’s head?”)
When we asked models about an entity directly, many could identify it.
2) Mention (“Will the model actually say your name?”)
When we asked practical questions where naming would be useful, many entities never appeared by name at all.
This is where Wikipedia “fails”: not because Wikipedia is useless, but because recognition is not the same job as retrieval and recommendation.
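To make the distinction concrete, here is a minimal Python sketch of how the two behaviors can be scored separately. The record fields, helper names, and string-matching rule are illustrative only, not the benchmark’s actual schema or scoring method.

```python
# Minimal sketch: "recognition" and "mention" are two different measurements.
# Field names and the matching rule are illustrative, not the benchmark's real schema.

from dataclasses import dataclass

@dataclass
class Record:
    entity: str       # e.g. "National Pork Board"
    model: str        # which of the chatbots/LLMs answered
    prompt_type: str  # "recognition" = asked about the entity directly; "task" = practical buyer question
    answer: str       # the model's full answer text
    correct_id: bool  # for recognition prompts: did the model correctly identify the entity?

def recognition_models(records: list[Record], entity: str) -> set[str]:
    """Models that identified the entity when asked about it directly ("known")."""
    return {r.model for r in records
            if r.entity == entity and r.prompt_type == "recognition" and r.correct_id}

def mention_models(records: list[Record], entity: str) -> set[str]:
    """Models that said the entity's name in at least one task-style answer ("named")."""
    return {r.model for r in records
            if r.entity == entity and r.prompt_type == "task"
            and entity.lower() in r.answer.lower()}

def known_but_never_named(records: list[Record], entities: list[str]) -> list[str]:
    """Entities recognized by at least one model but never named in any task answer."""
    return [e for e in entities
            if recognition_models(records, e) and not mention_models(records, e)]
```

The point of the sketch: an entity can score well on the first function and still return an empty set from the second.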
Why Wikipedia sometimes doesn’t translate into mentions
Reason 1: Topic–category mismatch
The question pushes the model into explain mode, not name mode.
If a question asks:
- “How does this work?”
- “What are the rules?”
- “What should I consider?”
…the model often answers with concepts, steps, and warnings. It does not give brand names unless the question clearly asks for them.
Reason 2: Default recommendation gravity
Even when naming is allowed, models tend to pick from a small set of “safe defaults”:
- Biggest brands,
- Most common global institutions,
- Names that appear constantly across the web.
This crowding effect is real. Many legitimate, well‑documented entities get pushed out.
A note on Wikipedia itself: “Not all pages are equal signals”
A Wikipedia page can prove an entity exists. But pages differ a lot in:
- How clearly they define category (“what it is”),
- How specific the name is (unique vs generic),
- And how strongly the page connects to common buyer tasks.
In other words: a Wikipedia page can be a great identity card, but a weak “when to mention this” trigger.
Three case studies: recognized, Wikipedia’d, and still invisible in answers
These three entities were recognized by many models, yet had zero mentions across the questions asked about them.
Case Study 1: National Pork Board
- Recognized by: 16 of 17 models
- Mentions in answers: 0 (across 10 questions)
What happened:
- The related questions were nutrition and food questions: things like “good sources of protein” or “healthy meats.”
In that world, the model’s safest move is to talk about:
- Protein types,
- Nutrition basics,
- Health cautions,
- General guidance.
A trade board does not get named unless the question is explicitly about:
- Industry groups,
- Marketing boards,
- “Who promotes pork,”
- “Who funds research and promotion.”
Why Wikipedia didn’t help:
- Wikipedia helps the model recognize the organization. But the questions didn’t create a reason to name it.
What would likely change the outcome:
Questions that invite organizations by name, like:
- “Which organizations promote pork in the U.S.?”
- “Who runs pork industry marketing and research?”
Case Study 2: Janus Henderson Investors
- Recognized by: 13 of 17 models
- Mentions in answers: 0 (across 10 questions)
What happened:
- In investing questions, models are pulled toward a “default list.” Think: the names that are everywhere, that feel low‑risk to mention, and that readers already know.
Even when a firm is real and documented, it can get crowded out by:
- The biggest index brands,
- The most common brokers,
- The most talked‑about institutions.
Why Wikipedia didn’t help:
- Wikipedia can say “this firm exists.” It does not force the model to choose it over the default list, especially if the question is broad.
What would likely change the outcome:
Questions that reduce the “default list” effect, like:
- “Name global active managers beyond the biggest index providers.”
- “List major asset managers with global reach that are not household defaults.”
Case Study 3: PolicyBazaar
- Recognized by: 13 of 17 models
- Mentions in answers: 0 (across 10 questions)
What happened:
Insurance questions often trigger process advice:
- Compare quotes,
- Review coverage,
- Check exclusions,
- Talk to a licensed agent.
Models also tend to stay generic unless the user anchors the context (especially by country). PolicyBazaar is strongly tied to a specific market context; if the question set doesn’t lock that in, the model stays neutral.
Why Wikipedia didn’t help:
Wikipedia doesn’t supply the missing trigger. The trigger is usually something like:
- “In India,”
- “Insurance comparison marketplace,”
- “Best sites to compare insurance prices.”
Without that, the model can answer the question without naming anyone.
What would likely change the outcome:
More specific prompts, like:
- “What are the best insurance comparison sites in India?”
- “Which marketplaces help compare health insurance prices in India?”
What these cases prove (the strategy point)
A Wikipedia page can help you get into the model’s memory. But mentions are a retrieval and recommendation problem.
To be named, you need two things:
- Questions that invite naming, and
- Content signals that connect your name to the exact tasks buyers ask about, strong enough to beat the default list.
A C‑suite checklist: how to move from “known” to “named”
1) Decide the questions you must win
Write down the 10–20 real buyer questions where your name should come up.
If you can’t write those questions clearly, the model won’t reliably retrieve you.
2) Make “category triggers” unavoidable
Across your public footprint (and third‑party coverage), tie your name to:
- The category label,
- The use case,
- And the region.
Not slogans. Plain language.
3) Reduce name variance
Models are sensitive to string differences. Choose the canonical form of your name and use it consistently.
4) Earn presence in “default list” sources
Models learn from what is repeated in high‑trust places:
- Credible rankings
- Analyst notes
- Trade publications
- Standards bodies
- Reputable directories
If you’re absent there, the model will often choose safer, more common names.
5) Test like a product, not like PR
Don’t ask “does the model know us?”
Ask “does the model name us in the exact buyer questions we care about?”
That’s the metric that drives demand.
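As a rough illustration of that metric, here is a minimal Python sketch that runs your buyer questions through a model and reports how often your name actually appears. The ask() callable, the question list, and the name pattern are placeholders to swap for your own; the questions shown reuse the PolicyBazaar examples above.

```python
# Minimal sketch of "test like a product": measure how often the model names you
# in the exact buyer questions you care about. All names here are illustrative.

import re

BUYER_QUESTIONS = [
    "What are the best insurance comparison sites in India?",
    "Which marketplaces help compare health insurance prices in India?",
    # ...the 10-20 real questions from step 1 of the checklist
]

# Accept the canonical name plus the spelling variants you actually see (step 3).
NAME_PATTERN = re.compile(r"\bpolicy\s*bazaar\b", re.IGNORECASE)

def mention_rate(ask, questions=BUYER_QUESTIONS, pattern=NAME_PATTERN) -> float:
    """Share of buyer questions whose answer names the brand at least once."""
    hits = 0
    for q in questions:
        answer = ask(q)  # ask(question: str) -> str, one model at a time
        if pattern.search(answer):
            hits += 1
    return hits / len(questions)

# Example run against a stubbed model, just to show the shape of the check:
if __name__ == "__main__":
    fake_ask = lambda q: "Compare quotes on PolicyBazaar or talk to a licensed agent."
    print(f"Named in {mention_rate(fake_ask):.0%} of buyer questions")
```

Run the same loop per model and per question set over time, and you have a trackable “named” score rather than a one-off “known” check.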
The new executive question
The question is no longer: “Do we have a Wikipedia page?”
It’s:
- “Do our buyers’ questions contain the triggers that make AI say our name, and do we have enough public signals to beat the default shortlist?”
Because in the AI era, being recognized is table stakes. Being named is the win.