In the LLMtel 1,000-entity study, one result should make every executive pause:
- Many entities are known to AI, yet still never get mentioned.
In the data, 319 entities fall into a bucket we’ve been calling “Ignored”:
- Known & Not Named = recognized by at least one LLM, but never shows up in answers.
- That’s nearly one-third of the full dataset.
This companion article answers the next executive question:
- What moves an entity out of “Ignored” and into “Named”?
(In other words: what makes AI bring you up, not just recognize you?)
The hidden problem: “AI knows you exist” is not the same as “AI recommends you”
The Ignored bucket proves a hard truth:
- Recognition is table stakes.
- Mention is the real prize.
The models can “know” you in the abstract and still avoid naming you when asked:
- “Who should we use?”
- “What’s the best option?”
- “Where can I buy…?”
- “Which provider is trusted?”
In the study, the question set is mostly practical and action-oriented. Many prompts start with words like How, What, Where, Who, Any. That’s “help me decide” language.
So the model is not behaving like an encyclopedia.
It is behaving like a recommendation engine under uncertainty.
And recommendation engines avoid risk.
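To make that concrete, here is a minimal sketch (with hypothetical prompts) that estimates how much of a prompt set opens with the decision words listed above:

```python
import re

# Leading words that signal "help me decide" prompts,
# per the study's description above.
DECISION_STARTERS = {"how", "what", "where", "who", "any"}

def decision_share(prompts: list[str]) -> float:
    """Fraction of prompts whose first word is decision-oriented."""
    hits = 0
    for p in prompts:
        m = re.match(r"[a-z]+", p.strip().lower())
        if m and m.group() in DECISION_STARTERS:
            hits += 1
    return hits / len(prompts) if prompts else 0.0

# Hypothetical prompts for illustration:
sample = [
    "Who should we use for payroll?",
    "What is the best option for SSO?",
    "Tell me about the history of ERP software.",
]
print(f"{decision_share(sample):.0%}")  # 67%
```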
A quick story from the Ignored bucket
One example from the study makes the gap obvious: an entity recognized by at least one model in the identity test, yet never named in a single answer. Known, and still invisible at exactly the moment a decision gets made.
The “Named” Levers: what makes an entity show up in answers
Getting mentioned means passing three gates:
- Identity gate: the model knows what you are.
- Category gate: the model knows where you fit.
- Confidence gate: the model feels safe mentioning you.
Wikipedia helps with the first gate, as our previous post and analysis showed. But “Named” depends heavily on the other two. Below are the most reliable levers for moving an entity out of Ignored.
Lever 1: Make your category unavoidable (in the exact words people ask)
Models don’t just retrieve “companies.” They retrieve companies in categories.
If your public footprint doesn’t loudly connect you to the category language users type, you’ll stay Ignored even if the model recognizes your name.
What to do:
- Create one clean sentence you repeat everywhere: “We are a [category] for [audience], solving [problem].”
- Use the words real buyers use, not internal jargon.
- Build pages that mirror real prompts:
- “Best ___ for ___”
- “___ alternatives”
- “___ pricing”
- “___ integrations”
- “___ security / compliance”
- “___ reviews”
Why it works:
When the model sees the same category associations across many sources, it becomes easier to retrieve you when the question is asked.
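As a sketch of what “pages that mirror real prompts” can look like in practice, here is a small helper that expands the templates above into a page plan. The brand, category, and audience values are hypothetical placeholders:

```python
# Templates mirror the prompt patterns listed above.
TEMPLATES = [
    "Best {category} for {audience}",
    "{brand} alternatives",
    "{brand} pricing",
    "{brand} integrations",
    "{brand} security / compliance",
    "{brand} reviews",
]

def page_plan(brand: str, category: str, audiences: list[str]) -> list[str]:
    """Expand the templates into concrete page titles."""
    pages = []
    for t in TEMPLATES:
        if "{audience}" in t:
            pages += [t.format(category=category, audience=a) for a in audiences]
        else:
            pages.append(t.format(brand=brand))
    return pages

# Hypothetical brand and category:
for title in page_plan("Acme", "incident management platform",
                       ["DevOps teams", "MSPs"]):
    print(title)
```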
Lever 2: Build “adjacency” to well-known entities (comparisons, alternatives, integrations)
LLMs frequently answer by anchoring on a few “default” names. If you are not connected to those names in public text, you don’t get pulled into the answer.
What to do:
- Publish credible “alternatives to X” and “X vs Y” material (neutral tone, factual).
- Get listed in partner ecosystems (app marketplaces, integration directories).
- Ensure customers and partners use your exact name when they describe your solution.
Why it works:
Adjacency is how models learn who belongs in the same shortlist.
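One hedged way to check your adjacency before investing: count how often public pages mention you alongside the category’s default names. A minimal sketch, with hypothetical brands and texts:

```python
from collections import Counter

def co_mentions(texts: list[str], brand: str, anchors: list[str]) -> Counter:
    """For each anchor brand, count documents that mention both it and you."""
    counts = Counter()
    for text in texts:
        low = text.lower()
        if brand.lower() in low:
            for anchor in anchors:
                if anchor.lower() in low:
                    counts[anchor] += 1
    return counts

# Hypothetical corpus of public pages:
corpus = [
    "Acme vs BigCo: which incident platform fits mid-market teams?",
    "Top BigCo alternatives, including Acme and OtherCo.",
]
print(co_mentions(corpus, "Acme", ["BigCo", "OtherCo"]))
# Counter({'BigCo': 2, 'OtherCo': 1})
```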
Lever 3: Increase third‑party validation (models trust outside voices more than you)
If your footprint is mostly self-written, the model often treats it like marketing, especially in recommendation mode.
The strongest “Named” signals are third-party:
- reputable publications
- analyst coverage
- standards bodies
- serious directories
- credible reviews
- academic, government, or industry references (when relevant)
What to do:
- Aim for coverage that explains what you are, not just announcements.
- Make sure third-party pages use your canonical name and category language.
- Build a press/coverage hub that links out to independent sources (not just press releases).
Why it works:
Third-party sources reduce the model’s uncertainty. Lower uncertainty = more willingness to name you.
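A rough way to gauge this is to measure what share of your known footprint lives on domains you don’t own. A minimal sketch, assuming a hypothetical `acme.example` as your own property:

```python
from urllib.parse import urlparse

OWN_DOMAINS = {"acme.example"}  # your own properties (assumption)

def third_party_share(urls: list[str]) -> float:
    """Fraction of footprint URLs hosted outside your own domains."""
    third = sum(1 for u in urls
                if urlparse(u).hostname not in OWN_DOMAINS)
    return third / len(urls) if urls else 0.0

# Hypothetical footprint URLs:
footprint = [
    "https://acme.example/product",
    "https://industrypress.example/acme-review",
    "https://directory.example/vendors/acme",
]
print(f"{third_party_share(footprint):.0%} third-party")  # 67% third-party
```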
Lever 4: Fix naming and disambiguation (models hate messy names)
The dataset includes “unknown but named” cases: entities that show up in answers even when the entity test says models don’t recognize them. That’s a warning sign: models sometimes “name-drop” based on text patterns.
You want the opposite:
- consistent, unambiguous identity
- one canonical name
- clear separation from similarly named entities
What to do:
- Standardize your name across the web (legal name, brand name, abbreviations).
- Avoid punctuation chaos and multiple variants.
- If your name is generic (e.g., “The Base”), add an always-on qualifier in public references (industry, location, parent brand).
Why it works:
If the model can’t reliably map mentions to one entity, it will often avoid mentioning you at all.
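A simple self-audit is to normalize every public mention of your name and flag the variants that don’t resolve to the canonical one. A minimal sketch with hypothetical variants:

```python
import re

CANONICAL = "Acme Software"  # hypothetical canonical name

def normalize(name: str) -> str:
    """Lowercase, strip punctuation and common legal suffixes."""
    n = re.sub(r"[^\w\s]", "", name).lower()
    n = re.sub(r"\b(inc|ltd|llc|gmbh)\b", "", n)
    return " ".join(n.split())

def inconsistent(mentions: list[str]) -> list[str]:
    """Return the mentions that don't normalize to the canonical name."""
    target = normalize(CANONICAL)
    return [m for m in mentions if normalize(m) != target]

print(inconsistent(["Acme Software, Inc.", "ACME software",
                    "Acme Soft", "The Acme"]))
# ['Acme Soft', 'The Acme'] -- variants a model can't safely merge
```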
Lever 5: Give the model “safe-to-recommend” proof (reduce perceived risk)
In recommendation answers, models try to be safe. They lean toward:
- well-known brands
- widely documented providers
- options with clear proof points
If you don’t look “safe,” you don’t get named.
What to do:
- Make proof easy to find:
- certifications and compliance
- customer logos or case studies (where allowed)
- security pages that are specific, not vague
- transparent positioning (“best for X, not for Y”)
- Publish stable facts: founding, HQ, product scope, industries served.
Why it works:
The model is more likely to name entities that look established, verifiable, and low-risk.
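One concrete, machine-readable way to publish those stable facts is schema.org Organization markup. A minimal sketch that generates the JSON-LD (all values are hypothetical placeholders):

```python
import json

# Stable facts: founding, HQ, scope -- per the lever above.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Software",
    "url": "https://acme.example",
    "foundingDate": "2015",
    "address": {"@type": "PostalAddress",
                "addressLocality": "Berlin",
                "addressCountry": "DE"},
    "description": "Incident management platform for DevOps teams.",
}

# Embed the output in a <script type="application/ld+json"> tag on your site.
print(json.dumps(org, indent=2))
```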
Lever 6: Show up where models expect to “find vendors”
Many Ignored entities live on the edges of the public knowledge graph.
Beyond Wikipedia, there are “vendor surfaces” models often learn from, such as:
- industry directories
- partner marketplaces
- standards organizations
- major business databases
- major review platforms (where relevant)
- conference speaker pages and association memberships
What to do:
- Pick 5–10 authoritative surfaces in your category and get consistent listings there.
- Ensure each listing uses the same category sentence and canonical name.
Why it works:
Models learn patterns from repeated, consistent placement. Being scattered across low-quality pages doesn’t help much.
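Consistency across those surfaces is checkable. A minimal sketch that audits hypothetical listings for the canonical name and the one category sentence:

```python
CANONICAL_NAME = "Acme Software"  # hypothetical
CATEGORY_LINE = "incident management platform for DevOps teams"

def audit_listings(listings: dict[str, str]) -> dict[str, list[str]]:
    """Map each listing source to the signals it is missing."""
    issues = {}
    for source, text in listings.items():
        low = text.lower()
        missing = []
        if CANONICAL_NAME.lower() not in low:
            missing.append("canonical name")
        if CATEGORY_LINE.lower() not in low:
            missing.append("category sentence")
        if missing:
            issues[source] = missing
    return issues

listings = {
    "directory.example": "Acme Software is an incident management platform for DevOps teams.",
    "marketplace.example": "Acme: tooling for modern ops.",
}
print(audit_listings(listings))
# {'marketplace.example': ['canonical name', 'category sentence']}
```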
What NOT to do (because it backfires)
If your goal is long-term AI visibility, avoid tactics that pollute trust:
- fake reviews
- spammy guest posts
- low-quality “directory blasts”
- “gaming” Wikipedia (it has strict rules and the reputational risk is real)
The point is to become more verifiable, not louder.
A practical executive playbook (30–60–90 days)
Days 0–30: Diagnose and clean identity
- Confirm whether you are Unknown, Ignored, or Named in your own testing (see the classification sketch after this list)
- Lock canonical name + one category sentence
- Fix the top 20 high-visibility pages where your name appears (site, partners, profiles)
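Using the study’s definitions (Named = shows up in answers; Ignored = recognized but never named; Unknown = not recognized), the classification itself is trivial once you have your own test results. A minimal sketch:

```python
def classify(recognized_by_any_model: bool, named_in_any_answer: bool) -> str:
    """Bucket an entity per the study's definitions."""
    if named_in_any_answer:
        return "Named"
    return "Ignored" if recognized_by_any_model else "Unknown"

print(classify(recognized_by_any_model=True, named_in_any_answer=False))
# Ignored: known by the models, never surfaced in answers
```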
Days 31–60: Build category and adjacency
- Publish category pages that match real buyer questions
- Build 3–5 comparison/alternative assets (factual, not trashy)
- Ensure integration pages and partner listings exist and are consistent
Days 61–90: Add third-party confidence
- Target credible mentions that describe what you do
- Strengthen directory presence in authoritative places
- Expand proof assets (case studies, compliance, documentation)
Ongoing: Measure mention-rate, not just awareness
Recognition is nice. Mentions are the KPI.
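Mention-rate is straightforward to track: the share of answers to your prompt set that actually name you. A minimal sketch with hypothetical answers (pair it with whichever models you test against):

```python
def mention_rate(answers: list[str], name_variants: list[str]) -> float:
    """Fraction of answers that mention any variant of your name."""
    variants = [v.lower() for v in name_variants]
    hits = sum(1 for a in answers
               if any(v in a.lower() for v in variants))
    return hits / len(answers) if answers else 0.0

# Hypothetical model answers to your prompt set:
answers = [
    "For mid-market teams, BigCo and Acme Software are solid picks.",
    "Most teams default to BigCo.",
]
print(f"{mention_rate(answers, ['Acme Software', 'Acme']):.0%}")  # 50%
```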
The simple takeaway
Our previous Wikipedia analysis showed something important: Wikipedia multiplies recognition, even inside the Ignored bucket.
This article is the next step:
- To become Named, you need more than identity.
- You need category fit, adjacency, and confidence in public, third-party, machine-readable ways.
In an AI-first discovery world, the winners won’t just be the companies that exist online.
They’ll be the companies that AI can confidently say out loud.