How LLMs Resolve Function-Based Queries for Content
When an LLM receives a function-based query, such as "Cat Insurer in UK" rather than a brand or product name, it applies natural language understanding and search-augmentation strategies to generate its results. Most top models combine semantic context extraction, web search/browsing, and structured data ranking to resolve generic queries like this. [engineeringblog.yelp+2]
- The model parses the query to identify its key semantic components: function (insurance), domain (cats), and location (UK). [engineeringblog.yelp]
- It segments and labels parts of the query as topics, attributes, and geographic regions, using techniques such as query segmentation and canonicalization. [engineeringblog.yelp]
- The LLM expands the user's intent into related keyword lists ("pet insurance," "cat insurer," "UK insurance for cats") to broaden the search scope for matching entities; a minimal sketch of this parse-and-expand step follows this list. [engineeringblog.yelp]
- If the LLM is augmented with live search (web browsing or API calls), it crafts targeted search queries and extracts business names, review snippets, and descriptions from high-ranking web pages relevant to that business function. [mantraideas+1]
- For models with static knowledge, results depend on prior exposure to structured data such as schema.org markup, directory listings, reviews, and snippet content describing each relevant business; ranking then favors the entities best described for the query context. [dev+1]
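The parse-and-expand step can be pictured as a small slot-filling pass followed by query expansion. The sketch below is a toy illustration in Python; the slot vocabularies, the ParsedQuery structure, and the expansion rules are assumptions for this example, not any vendor's actual pipeline.

```python
# Toy sketch of function-based query understanding: segment the query into
# semantic slots, then expand it into related search queries. The slot
# vocabularies and expansion rules are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ParsedQuery:
    function: str                 # the service the user wants, e.g. "insurance"
    domain: str                   # what the service applies to, e.g. "cats"
    location: str | None          # optional geographic constraint, e.g. "UK"
    expansions: list[str] = field(default_factory=list)

def parse_and_expand(query: str) -> ParsedQuery:
    tokens = query.lower().split()
    # Toy canonicalization tables mapping surface forms to slot values.
    functions = {"insurer": "insurance", "insurance": "insurance"}
    domains = {"cat": "cats", "pet": "pets"}
    locations = {"uk": "UK", "usa": "USA"}

    parsed = ParsedQuery(function="", domain="", location=None)
    for tok in tokens:
        if tok in functions:
            parsed.function = functions[tok]
        elif tok.rstrip("s") in domains:
            parsed.domain = domains[tok.rstrip("s")]
        elif tok in locations:
            parsed.location = locations[tok]

    # Expand the intent into several equivalent search phrasings.
    base = f"{parsed.domain.rstrip('s')} {parsed.function}"
    parsed.expansions = [base, f"pet {parsed.function}"]
    if parsed.location:
        parsed.expansions.append(f"{parsed.location} {parsed.function} for {parsed.domain}")
    return parsed

print(parse_and_expand("Cat Insurer in UK"))
# ParsedQuery(function='insurance', domain='cats', location='UK',
#             expansions=['cat insurance', 'pet insurance', 'UK insurance for cats'])
```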
Result Generation Process (Step-by-Step)
- The query is interpreted semantically rather than by direct brand match, favoring sources that describe their function: websites, review platforms, and aggregators that mention "cat insurance". [mantraideas+1]
- SEO signals (structured data, schema.org markup, rich snippets), where present, heavily influence result ranking, making those businesses far more visible in function-based queries. [dev]
- Some models use Retrieval-Augmented Generation (RAG) to pull live data, matching the query context to recent publications, reviews, and directory summaries, then synthesizing a coherent result list as a direct conversational answer; see the retrieval sketch after this list. [tandfonline]
- Models such as Gemini, GPT-5, and Grok run live searches, then extract and summarize first-party reviews and snippets to show why each result fits the function, often including key quotes or review snippets for decision support. [mantraideas+1]
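A minimal sketch of that retrieve-then-synthesize loop, assuming hypothetical `web_search` and `llm_complete` helpers standing in for a real search API and model client:

```python
# Minimal retrieve-then-synthesize (RAG) loop for a function-based query.
# `web_search` and `llm_complete` are hypothetical stand-ins; swap in a
# real search API and LLM client.

def web_search(query: str, top_k: int = 5) -> list[dict]:
    """Hypothetical search call returning [{'title', 'url', 'snippet'}, ...]."""
    raise NotImplementedError("plug in a real search API here")

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call returning a completion string."""
    raise NotImplementedError("plug in a real model client here")

def answer_function_query(expansions: list[str]) -> str:
    # 1. Retrieve: run each expanded query and pool the evidence.
    evidence = []
    for q in expansions:
        evidence.extend(web_search(q))
    # 2. Deduplicate by URL so one well-ranked page isn't counted twice.
    seen, unique = set(), []
    for doc in evidence:
        if doc["url"] not in seen:
            seen.add(doc["url"])
            unique.append(doc)
    # 3. Synthesize: ground the answer in the retrieved snippets.
    context = "\n".join(f"- {d['title']}: {d['snippet']} ({d['url']})" for d in unique)
    prompt = (
        "Using only the sources below, list businesses matching the query "
        "'cat insurer in the UK', with a one-line reason and a citation each.\n"
        f"Sources:\n{context}"
    )
    return llm_complete(prompt)
```

Constraining the synthesis prompt to the retrieved snippets is what keeps the final ranked list grounded in live evidence rather than the model's static knowledge.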
Model-Specific Highlights
- GPT-5, Gemini 2.5 Pro, Claude 4: these models combine advanced semantic segmentation with live web retrieval, so function-based queries return current, relevant results based on both business-directory exposure and rich-snippet SEO content. [dev+2]
- Perplexity, Grok 4: these conversational agents excel on "open" queries, running live searches and returning ranked results with snippets extracted directly from business listings or aggregated review platforms. [arxiv]
- Llama 4, Qwen3, DeepSeek: these rely on embedded knowledge and prior exposure to directory structures, but increasingly integrate browsing where allowed, weighting results by schema and functional descriptions. [mantraideas+1]
Why Rich Snippets and SEO Are Still Critical
- Without explicit function descriptions in schema.org markup or directory entries, LLMs may only surface entities that are already broadly documented and reviewed for that function (i.e., those with robust SEO and rich-snippet support). [dev]
- For high discoverability in function-based AI queries, business websites should therefore deploy up-to-date schema.org metadata and make their service function and location explicit in both markup and plain content; a minimal markup example follows. [dev]
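As a concrete illustration of "making function and location explicit", the sketch below builds schema.org JSON-LD for a hypothetical insurer; the name, URL, and description are placeholders, and in production the output would be embedded in a `<script type="application/ld+json">` tag.

```python
# Minimal schema.org JSON-LD making the service function ("cat insurance")
# and location (UK) explicit. The business name, URL, and description are
# placeholders for this example.
import json

markup = {
    "@context": "https://schema.org",
    "@type": "InsuranceAgency",
    "name": "Example Cat Insurance Ltd",   # placeholder
    "url": "https://example.com",          # placeholder
    "description": "Specialist cat insurance for owners across the UK.",
    "areaServed": {"@type": "Country", "name": "United Kingdom"},
    "makesOffer": {
        "@type": "Offer",
        "itemOffered": {"@type": "Service", "name": "Cat insurance"},
    },
}

print(json.dumps(markup, indent=2))
```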
In summary, every top LLM resolves function-based queries primarily through semantic parsing, expanded search strategies, live browsing capabilities, and weighting toward the entities best described for the requested function, making rich snippets and schema.org markup essential not just for Google but for all AI-generated search results. [arxiv+4]
Yes, many leading large language models (LLMs) can now traverse knowledge graphs (KGs) to persist and use context as part of their reasoning and generation process. This capability has advanced significantly in 2024–2025, especially with Retrieval-Augmented Generation (RAG) variants that explicitly use knowledge graphs. [arxiv+2]
How Knowledge Graph Traversal Works in LLMs for Context
- RAG with KGs: models such as Graph RAG or KG-RAG retrieve relevant subgraphs from structured KG databases rather than plain text alone, aligning this structured knowledge with the LLM's generative abilities to answer queries and persist context over multiple steps. [openreview+1]
- Embeddings and Injection: recent methods use knowledge graph embeddings or special token representations to inject context and entity relationships directly into the LLM, allowing the model to maintain and reference interrelated concepts throughout a session, which improves accuracy and factual grounding. [arxiv+1]
- Hybrid Reasoning: by integrating traversed paths through the knowledge graph, an LLM can not only recall facts but also reason over relationships and infer new information, maintaining persistent context as a chain of related nodes and edges; a traversal sketch follows this list. [dataversity+1]
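A minimal sketch of the traverse-and-inject pattern, assuming a toy networkx graph and serializing retrieved triples into the prompt; the facts, entities, and prompt format are illustrative only, not a specific KG-RAG implementation.

```python
# Sketch of KG-backed context for an LLM: retrieve a small subgraph around
# the query entities and serialize it as triples prepended to the prompt.
# The graph contents here are toy facts for illustration.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("PetGuard", "cat insurance", relation="offers")        # toy fact
kg.add_edge("PetGuard", "United Kingdom", relation="operates_in")  # toy fact
kg.add_edge("cat insurance", "pet insurance", relation="subtype_of")

def retrieve_subgraph(entities: list[str], hops: int = 1) -> list[tuple]:
    """Collect (head, relation, tail) triples within `hops` of the seed entities."""
    frontier, triples, seen = set(entities), [], set()
    for _ in range(hops):
        next_frontier = set()
        for node in frontier:
            if node in seen or node not in kg:
                continue
            seen.add(node)
            for head, tail, data in kg.out_edges(node, data=True):
                triples.append((head, data["relation"], tail))
                next_frontier.add(tail)
        frontier = next_frontier
    return triples

def build_prompt(question: str, entities: list[str]) -> str:
    facts = "\n".join(f"({h}) -[{r}]-> ({t})" for h, r, t in retrieve_subgraph(entities))
    return f"Known facts:\n{facts}\n\nAnswer using only these facts: {question}"

print(build_prompt("Which insurer covers cats in the UK?", ["PetGuard"]))
```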
Current Capabilities and Best Practices
- State-of-the-art LLMs such as GPT-5, Gemini 2.5 Pro, and Claude 4 Opus support knowledge graph traversal for deep reasoning, explainability, and long-term persistence of conversational context, especially when paired with RAG frameworks. [arxiv+1]
- Knowledge graph integration greatly reduces hallucinations and enables explainable, persistent dialogue, because the model can reference and traverse structured context for every answer. [openreview+1]
- LLMs can flexibly adjust the size and scope of the retrieved subgraph to the complexity of the query, supporting both detailed single queries and sustained multi-turn reasoning; one possible scoping policy is sketched below. [openreview]
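One way such scoping might work, reusing the retrieve_subgraph helper from the earlier sketch; the word- and clause-count heuristic is an assumption for illustration, not a published method.

```python
# One possible scoping policy: widen the traversal for harder queries.
# The complexity heuristic below is an illustrative assumption; production
# systems might instead use a classifier or ask the LLM itself.

def choose_hops(question: str, max_hops: int = 3) -> int:
    """Crude heuristic: longer, multi-clause questions get deeper traversal."""
    words = len(question.split())
    clauses = question.count(",") + 1
    return min(max_hops, 1 + words // 12 + (clauses - 1))

question = "Which UK insurers cover cats, and how do their policies differ?"
hops = choose_hops(question)  # -> 2 for this two-clause question
# triples = retrieve_subgraph(["cat insurance"], hops=hops)  # from earlier sketch
print(hops)
```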
Practical Outcomes
- Business and enterprise AI systems now rely on KG + LLM fusion for customer support, research, and analytics use cases, because the approach enables persistent, context-rich dialogue and improved factual accuracy. [dataversity]
- The field continues to evolve, with new versions of KG-RAG and tooling that make it easier to integrate knowledge graphs into both open-source and proprietary LLM stacks. [arxiv+2]
In summary, LLMs can already traverse knowledge graphs and persist context through retrieval, embedding, and direct-injection methods. This capability is maturing rapidly in 2025 and is already central to the most advanced, context-aware AI systems. [arxiv+3]
Sources
- https://www.sciencedirect.com/science/article/pii/S0306457325002213
- https://arxiv.org/pdf/2505.20099.pdf
- https://neo4j.com/blog/developer/llm-knowledge-graph-builder-release/
- https://aclanthology.org/2025.findings-acl.436/
- https://openreview.net/forum?id=JvkuZZ04O7
- https://pubs.rsc.org/en/content/articlelanding/2025/dd/d4dd00362d
- https://arxiv.org/html/2505.07554v1
- https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2025.1590632/full
- https://www.dataversity.net/articles/deconstructing-knowledge-graphs-and-large-language-models/
- https://aclanthology.org/2025.findings-acl.856/
- https://engineeringblog.yelp.com/2025/02/search-query-understanding-with-LLMs.html
- https://mantraideas.com/llm-web-search/
- https://arxiv.org/html/2510.11560v1
- https://dev.to/lovestaco/seo-starter-guide-tips-for-crawlers-and-llm-discovery-377b
- https://www.tandfonline.com/doi/full/10.1080/12460125.2024.2410040
- https://www.milliman.com/en/insight/potential-of-large-language-models-insurance-sector
- https://actuaries.org.uk/media/purp2kk5/actuary-gpt-applications-of-large-language-models-to-insurance-and-actuarial-work.pdf
- https://www.cambridge.org/core/journals/british-actuarial-journal/article/actuarygpt-applications-of-large-language-models-to-insurance-and-actuarial-work/C99537965CCC826BEDD664044CC80A5A
- https://www.munichre.com/us-life/en/insights/future-of-risk/large-language-models-in-underwriting-and-claims.html
- https://www.cambridge.org/core/journals/british-actuarial-journal/article/actuarygpt-applying-llms-to-insurance-and-actuarial-work/C409C429D0A13B45691FD7B36791DB9A
- https://www.milliman.com/en/insight/exploring-large-language-models-guide-insurance
- https://tabulareditor.com/blog/querying-semantic-models-with-llms
- https://arxiv.org/html/2507.04444v1
- https://theaicore.com/documents/industry_reports/AiCore%20-%20The%20Business%20Value%20of%20Generative%20AI%20for%20UK%20Insurers.pdf
- https://www.tinybird.co/blog-posts/why-llms-struggle-with-analytics-and-how-we-fixed-that
- https://www.alexandria.unisg.ch/bitstreams/8bf0d3a6-b95b-4388-98ae-e3ee7932caaa/download
- https://haystack.deepset.ai/blog/business-intelligence-sql-queries-llm
- https://www.linkedin.com/pulse/leveraging-large-language-models-insurance-comparison-jari-hiltunen-p8h3f
- https://theodi.org/news-and-events/blog/the-promise-and-challenge-of-data-discovery-with-llms/
- https://arxiv.org/html/2406.10249v1
