Quick facts
| Fact | Detail |
|---|---|
| First named | Context engineering was popularized as a discipline in June 2025 by Tobi Lutke and Andrej Karpathy |
| Hallucination reduction | Graph-enhanced retrieval frameworks like MEGA-RAG achieve hallucination rate reductions of over 40% compared to traditional RAG approaches |
| Gartner position | Knowledge graphs sit on the “Slope of Enlightenment” in Gartner’s 2025 Hype Cycle for AI |
| Market share | Context-rich knowledge graphs account for an estimated 60% of semantic knowledge graphing market revenue (OpenPR market analysis, 2026) |
| Enterprise AI adoption | 95%+ of enterprises will use GenAI APIs or models by 2028 (Gartner’s 2025 Hype Cycle for AI) |
| Governance gap | Only 21% of companies have mature agent governance frameworks (Deloitte State of AI 2026) |
| Standard protocol | Model Context Protocol (MCP) is a Linux Foundation project, supported by AWS, Anthropic, Google, Microsoft, and OpenAI |
Knowledge graphs vs context graphs: side-by-side comparison
| Dimension | Knowledge Graph | Context Graph |
|---|---|---|
| Core purpose | Define semantic relationships and business concepts | Capture operational intelligence and decision traces for AI |
| Primary focus | “What things are” - entities, taxonomies, ontologies | “How things work” - lineage, policies, precedents, temporal context |
| Relationship types | Conceptual: “Customer places Order”, “Product belongs to Category” | Operational: “Pipeline transforms Table”, “Policy governs Asset”, “Decision approved by User” |
| Temporal awareness | Static or slowly changing relationships | Time-travel queries, validity periods, transaction timestamps, historical evolution |
| Data structure | Nodes (entities), Edges (semantic relationships), Properties (attributes) | Nodes (entities + policies + decisions), Edges (semantic + operational), Metadata (confidence, provenance, quality) |
| Query patterns | SPARQL for triple stores, Cypher for property graphs - semantic traversal | Graph queries + operational filters - “Find assets where quality > 95% AND certified = true AND modified in last 30 days” |
| AI integration | Provides structured knowledge for semantic understanding | Engineered for LLM consumption: token efficiency, relevance ranking, hallucination reduction |
| Governance approach | External documentation, separate policy systems | Policies as first-class graph nodes, access controls embedded in structure |
| Update frequency | Periodic updates, manual curation cycles | Continuous active metadata collection from live systems |
| Typical use cases | Business glossaries, product catalogs, medical ontologies, semantic search | AI agent decision-making, impact analysis, compliance auditing, agentic workflows |
| Example platforms | Neo4j, Stardog, GraphDB, Amazon Neptune | Atlan (context layer), Glean (enterprise context), context-aware data catalogs |
| Best for | Defining consistent business vocabulary across human users | Enabling autonomous AI systems with full operational context |
| Limitation | Lacks operational metadata AI needs for trustworthy decisions | Requires more complex infrastructure and continuous metadata collection |
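The operational query pattern from the table above (“quality > 95% AND certified = true AND modified in last 30 days”) can be sketched in plain Python. The asset fields and names below are illustrative assumptions, not a real schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical context graph asset nodes; field names are illustrative.
@dataclass
class AssetNode:
    name: str
    quality: float      # quality score, 0-100
    certified: bool
    modified: datetime

def find_trusted_assets(assets, now=None):
    """Operational filter: quality > 95, certified, modified in last 30 days."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=30)
    return [a.name for a in assets
            if a.quality > 95 and a.certified and a.modified >= cutoff]

now = datetime(2026, 1, 31)
assets = [
    AssetNode("finance.revenue_actuals", 99.0, True,  datetime(2026, 1, 20)),
    AssetNode("staging.revenue_draft",   91.5, False, datetime(2026, 1, 29)),
    AssetNode("finance.revenue_2019",    98.0, True,  datetime(2019, 3, 1)),
]
print(find_trusted_assets(assets, now=now))  # → ['finance.revenue_actuals']
```

A knowledge graph answers the semantic part (“which nodes are revenue tables?”); the operational predicates on quality, certification, and recency are what the context graph column adds.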
What context graphs add beyond knowledge graphs
Knowledge graphs excel at representing “what is” through semantic relationships. Context graphs extend this foundation by answering “how” and “why” questions that AI systems need to operate reliably.
1. Decision traces and precedent links
Context graphs capture approvals, exceptions, and replayable workflows that make the full decision path explicit. They preserve not just outcomes, but how decisions were made, under what constraints, and by whom. When an AI agent encounters an edge case, these traces become searchable precedent — the system reuses prior resolutions instead of forcing teams to repeatedly re-learn the same exceptions.
Modern context-layer implementations for AI treat decision history as traversable graph elements, not as entries relegated to external audit logs or detached workflow systems.
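As a rough sketch of decision traces becoming searchable precedent, the following Python models prior resolutions as records an agent can look up before escalating. The case names and fields are hypothetical:

```python
# Hypothetical precedent store; in a context graph these would be
# traversable decision-trace nodes, not rows in an external log.
precedents = [
    {"case": "pii-export-request", "decision": "deny",
     "approved_by": "DataGovernanceBoard", "condition": None},
    {"case": "late-arriving-partition", "decision": "allow-backfill",
     "approved_by": "DataOps", "condition": "within 48h"},
]

def find_precedent(case):
    """Return prior resolutions matching an edge case the agent has hit."""
    return [p for p in precedents if p["case"] == case]

match = find_precedent("late-arriving-partition")
print(match[0]["decision"], "-", match[0]["condition"])  # → allow-backfill - within 48h
```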
2. Temporal qualifiers and time-travel support
Static knowledge graphs represent relationships as they exist now. Context graphs incorporate validity periods, transaction timestamps, and native time-travel query capabilities. This temporal structure allows systems to reason beyond current state — understanding how entities, permissions, and relationships evolved over time.
For AI agents, this means past states can be queried directly, transitions can be analyzed, and incorrect conclusions drawn from a flattened, present-only view of truth can be avoided.
A temporal query in practice returns a full change timeline, not just current state.
Query: “Show all governance policy changes affecting finance.revenue_actuals in the last 90 days.”
The context graph returns a timeline:
- Day -87: `data-quality-threshold` updated from 95% to 98% by DataOps team
- Day -62: `access-policy-12` modified to grant read access to a new AI forecasting agent
- Day -34: `retention-policy-finance` extended from 7 years to 10 years per new regulatory requirement
- Day -12: `classification` changed from `internal` to `restricted` after audit finding
Each entry includes who made the change, why (linked to the approval or policy trigger), and which downstream assets were affected. A data lineage query in a knowledge graph returns current state only. It cannot answer “what changed” or “who approved the change” because it has no native concept of state over time.
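A timeline like the one above relies on each fact carrying a validity window, so an “as of” query is a simple filter rather than a join against a separate temporal store. A minimal Python sketch, with illustrative dates and values:

```python
from datetime import date

# Hypothetical bitemporal facts: each carries a validity window.
# An open-ended fact has valid_to=None.
classification_history = [
    {"asset": "finance.revenue_actuals", "value": "internal",
     "valid_from": date(2024, 1, 1), "valid_to": date(2026, 1, 19)},
    {"asset": "finance.revenue_actuals", "value": "restricted",
     "valid_from": date(2026, 1, 19), "valid_to": None},
]

def classification_as_of(asset, on):
    """Time-travel query: what was the classification of `asset` on `on`?"""
    for fact in classification_history:
        if (fact["asset"] == asset and fact["valid_from"] <= on
                and (fact["valid_to"] is None or on < fact["valid_to"])):
            return fact["value"]
    return None

print(classification_as_of("finance.revenue_actuals", date(2026, 1, 15)))  # → internal
print(classification_as_of("finance.revenue_actuals", date(2026, 2, 1)))   # → restricted
```

The same lookup against a present-only knowledge graph could only ever return the current value.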
3. Provenance and confidence scoring
Context graphs treat source attribution and quality signals as embedded metadata, not optional annotations bolted on after the fact. Every relationship carries confidence scores, verification timestamps, and provenance chains — so AI systems can reason about reliability directly. Verified facts, inferred relationships, and low-confidence signals remain distinguishable.
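A minimal sketch of confidence-aware retrieval in Python, assuming each relationship carries embedded confidence and provenance fields (the names and values are illustrative):

```python
# Hypothetical edges with embedded confidence and provenance metadata.
edges = [
    {"src": "orders", "rel": "derived-from", "dst": "raw.events",
     "confidence": 0.99, "source": "pipeline-lineage", "verified": "2026-01-10"},
    {"src": "orders", "rel": "similar-to", "dst": "orders_v1",
     "confidence": 0.55, "source": "ml-inference", "verified": None},
]

def reliable_edges(min_confidence=0.9):
    """Keep only relationships an AI system may treat as verified facts;
    lower-confidence inferences stay queryable but distinguishable."""
    return [e for e in edges if e["confidence"] >= min_confidence]

print([e["rel"] for e in reliable_edges()])  # → ['derived-from']
```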
4. Policy nodes as graph elements
Instead of storing governance rules in external documentation, context graphs represent policies as nodes connected through typed relationships. Access controls, data classification rules, and data governance requirements become part of the graph structure itself.
What governance-as-queryable-nodes looks like in practice, across three scenarios:
PII classification propagation
Query: “Show all downstream tables affected by the PII classification on customer_email in raw.customers.”
Traversal: customer_email --classified-as--> PII --propagates-to--> 14 downstream tables --governed-by--> GDPR-retention-policy
Result: The agent identifies all 14 tables that inherit the PII classification and the specific GDPR retention policy that applies to each.

Access control enforcement
Query: “Can the marketing analytics agent access revenue_by_segment in the finance warehouse?”
Traversal: marketing-analytics-agent --has-role--> marketing-viewer --permitted-by--> access-policy-7 --grants-access-to--> [marketing datasets only]
Result: Access denied. The graph enforces the boundary structurally. No post-hoc check needed.

Compliance audit trail
Query: “Who approved the exception allowing the ML training pipeline to use customer behavioral data?”
Traversal: ml-pipeline-v3 --uses--> behavioral_events --exception-granted-by--> DataGovernanceBoard --on-date--> 2025-11-14 --valid-until--> 2026-05-14 --condition--> anonymization-applied
Result: Full decision trace with approval authority, validity window, and conditions.
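The access-control scenario can be sketched as a plain graph traversal in Python. The node names mirror the hypothetical scenario, and the adjacency encoding is an assumption for illustration:

```python
# Hypothetical graph: policies are nodes, reachable via typed edges.
# Keys are (node, relationship-type); values are target nodes.
graph = {
    ("marketing-analytics-agent", "has-role"): ["marketing-viewer"],
    ("marketing-viewer", "permitted-by"): ["access-policy-7"],
    ("access-policy-7", "grants-access-to"): ["marketing.campaigns", "marketing.leads"],
}

def can_access(agent, dataset):
    """Traverse agent -> role -> policy -> datasets; access is structural,
    not a post-hoc check against an external rules document."""
    for role in graph.get((agent, "has-role"), []):
        for policy in graph.get((role, "permitted-by"), []):
            if dataset in graph.get((policy, "grants-access-to"), []):
                return True
    return False

print(can_access("marketing-analytics-agent", "marketing.campaigns"))        # → True
print(can_access("marketing-analytics-agent", "finance.revenue_by_segment")) # → False
```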

Five key differences between knowledge graphs and context graphs for AI-ready data. Image by Atlan.
What are common misconceptions about context graphs?
Context graphs face a reasonable critique: they look like knowledge graphs with more metadata bolted on. If the only difference is more edges and node types, the term is marketing, not architecture. That skepticism deserves a direct answer.
The structural distinction comes down to three properties that knowledge graphs were never designed to support.
1. Governance lives in the graph, not next to it. In a knowledge graph, policies exist as external documentation that humans reference. In a context graph, policies are queryable nodes. An AI agent traverses a path like Dataset --governed-by--> RetentionPolicy --requires--> DataClassification and enforces it during execution. No human in the loop, no post-hoc check. Only 21% of companies have mature agent governance frameworks (Deloitte State of AI 2026), which makes this structural difference operational, not theoretical.
2. Temporal queries are native, not retrofitted. Knowledge graphs can add temporal extensions. Context graphs treat time-travel queries as core graph operations. Querying “What was the classification of this dataset on January 15?” is a single traversal, not a join against a separate temporal store.
3. Decision traces are graph paths, not log entries. Approvals, exceptions, and precedent links are traversable relationships in the graph itself. The audit trail is the graph. You query it with the same tools you use to query lineage or classification.
Context graphs build on knowledge graph foundations. They are an evolution, not a replacement. Organizations already invested in knowledge graph structures should treat context graphs as an extension layer. The skepticism that pushes vendors to show structural differences rather than just assert them is doing its job.
Who builds the context graph? Platform vs. application
The question of ownership sparked significant debate in early 2026. Vertical agent startups operating in execution paths see individual decisions deeply, but most enterprise decisions pull context from 6-10+ systems simultaneously. Because every enterprise runs a different combination of systems, this heterogeneity suggests context graphs are fundamentally a platform problem rather than an application one.
How context graphs reduce AI hallucinations through structured context
AI hallucinations occur when models generate plausible but factually incorrect responses. Context graphs address this challenge through multiple mechanisms:
1. Graph-grounded retrieval with operational filters
When AI systems query context graphs, they retrieve more than semantic relationships. They pull full operational context, including data lineage, governing policies, quality signals, and ownership metadata. Knowledge graph-enhanced RAG systems achieve strong accuracy rates in specialized domains, with context graphs providing additional improvements through operational guardrails.
2. Token-efficient context engineering
LLMs operate under strict token limits, making context selection as important as context quality. Context graphs optimize information delivery through relevance ranking, confidence-based filtering, and hierarchical summarization that adjusts detail based on query complexity.
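One way to sketch token-budgeted context assembly in Python: rank candidate snippets by relevance, then greedily pack them until the budget runs out. The relevance scores and token costs below are illustrative assumptions:

```python
# Sketch of token-budgeted context assembly. Each candidate snippet is
# (relevance_score, token_cost, text); scores and costs are hypothetical.
def assemble_context(snippets, budget_tokens):
    """Greedily select the most relevant snippets that fit the token budget."""
    chosen, used = [], 0
    for relevance, cost, text in sorted(snippets, reverse=True):
        if used + cost <= budget_tokens:
            chosen.append(text)
            used += cost
    return chosen

snippets = [
    (0.95, 40,  "finance.revenue_actuals: certified, quality 99%"),
    (0.80, 120, "full lineage graph for revenue pipeline"),
    (0.40, 200, "all historical schema versions"),
]
# Under a 180-token budget, the low-relevance, high-cost snippet is dropped.
print(assemble_context(snippets, budget_tokens=180))
```

A production system would use smarter selection than a greedy pass (and real tokenizer counts), but the shape of the problem, relevance ranking under a hard token budget, is the same.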
3. Reasoning chains with explainable paths
Unlike vector similarity search, context graph retrieval follows explicit, typed relationships. Each response can be traced back through the graph to the specific entities, relationships, and policies that informed it.
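A small Python sketch of path-based explainability: a breadth-first search that returns the chain of typed edges justifying an answer. The edge labels and node names are hypothetical:

```python
from collections import deque

# Hypothetical typed edges: node -> [(relationship, target), ...]
edges = {
    "Q4-revenue-question": [("resolved-by", "fiscal-calendar")],
    "fiscal-calendar": [("maps-to", "fiscal-Q4")],
    "fiscal-Q4": [("answered-from", "finance.revenue_actuals")],
}

def explain_path(start, goal):
    """BFS that returns the list of (src, relationship, dst) edges linking
    a question to its grounding source; this path IS the explanation."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

print(explain_path("Q4-revenue-question", "finance.revenue_actuals"))
```

A vector store could return the same source document, but it cannot return this chain of typed relationships as evidence.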

How context graphs ground AI agents in operational reality to prevent hallucinations. Image by Atlan.
How a single query avoids hallucination: step by step
Consider what happens when an AI agent receives the question: “What was our Q4 revenue?”
1. The agent receives a query it cannot safely answer alone. “Q4” is ambiguous. Multiple revenue tables exist. The model has no way to pick the right quarter definition or the right table without external context.
2. The context graph resolves the ambiguity. The graph contains the organization’s fiscal calendar definition node. It maps “Q4” to fiscal Q4 (October through December). Without this, the model defaults to calendar Q4 and returns the wrong number.
3. The graph identifies the authoritative source. It finds finance.revenue_actuals, certified by the CFO office with a 99% confidence score. Three other uncertified revenue tables containing partial or draft data are filtered out.
4. The graph enforces access control. It verifies the requesting agent holds a role with permission to read the certified table through its access policy node.
5. The graph attaches provenance. The response includes lineage from the ERP system through the finance pipeline, the last refresh timestamp, and the data quality score.
6. The agent returns a grounded answer. The correct fiscal Q4 figure, from the certified source, with full provenance. Atlan’s internal testing found that structured context improved accuracy by up to 5x compared to schema-only setups. Without the context graph, this query fails at step 2 (wrong quarter) or step 3 (wrong table) or both.
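The steps above can be condensed into a Python sketch, with the fiscal calendar, table registry, and access map as stand-in data structures (all names are hypothetical):

```python
# Hypothetical stand-ins for context graph nodes used in the walkthrough.
fiscal_calendar = {"Q4": "October-December"}  # fiscal, not calendar, Q4
tables = [
    {"name": "finance.revenue_actuals", "certified": True,  "confidence": 0.99},
    {"name": "scratch.revenue_draft",   "certified": False, "confidence": 0.60},
]
access = {("reporting-agent", "finance.revenue_actuals")}

def answer_q4_revenue(agent):
    quarter = fiscal_calendar["Q4"]                      # resolve "Q4" ambiguity
    source = max((t for t in tables if t["certified"]),  # certified source only
                 key=lambda t: t["confidence"])
    if (agent, source["name"]) not in access:            # structural access check
        return {"error": "access denied"}
    return {"quarter": quarter,                          # grounded, with provenance
            "source": source["name"],
            "confidence": source["confidence"]}

print(answer_q4_revenue("reporting-agent"))
```

Each step that the sketch performs with a dictionary lookup is, in a real context graph, a traversal over calendar, certification, policy, and lineage nodes.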
When to use knowledge graphs vs context graphs
Knowledge graphs and data catalog platforms powered by context graphs are complementary, not interchangeable.
Use knowledge graphs for semantic understanding
Knowledge graphs are strongest when the goal is meaning and consistency. They suit:
- Defining domain ontologies and business vocabularies
- Creating taxonomies and conceptual relationships
- Enabling semantic search across structured and unstructured content
This model aligns naturally with BI-era workflows where humans consume data through dashboards, reports, and ad hoc analysis.
Use context graphs for AI-native operations
Context graphs become essential when systems must act, not just explain. They are required when:
- AI agents operate autonomously in real workflows
- Decisions depend on precedent, approvals, and exceptions
- Data governance policies must be enforced programmatically
- Temporal context changes interpretation
- Explainability requires traceable reasoning paths
Combine both for AI-ready data operations
Leading organizations layer them. Knowledge graphs define what things mean; context graphs encode how decisions are made and enforced.
How do context graph and knowledge graph architectures differ?
Knowledge graph architecture vs context graph architecture
| Dimension | Knowledge Graphs | Context Graphs |
|---|---|---|
| Core foundation | Built on RDF triple stores or property graphs (e.g., Neo4j) | Built on graph databases extended for operational and AI context |
| Modeling approach | Ontology-driven, using OWL or RDFS for formal semantic definitions | Semantically enriched, combining graph structure with active metadata |
| Query paradigm | SPARQL or Cypher for semantic traversal and inference | Graph traversal with operational and policy-aware filters |
| Inference model | Rule-based inference engines derive implicit relationships | Precedent-based reasoning using decisions, lineage, and temporal context |
| Relationship dynamics | Primarily static, conceptual relationships | Continuously evolving relationships driven by real system activity |
| Temporal support | Limited or external to the graph | Native time-travel queries, validity windows, historical state |
Knowledge graph implementation vs context graph implementation
| Dimension | Knowledge Graphs | Context Graphs |
|---|---|---|
| Storage and compute | Graph-native storage, often tightly coupled to query workloads | Graph-native storage with separation of storage and compute for scale |
| Metadata collection | Batch ingestion and manual curation | Continuous ingestion from queries, pipelines, orchestration, and users |
| Enrichment strategy | Schema-first, ontology-aligned enrichment | Selective enrichment based on signal value and operational churn |
| Retrieval optimization | Optimized for semantic correctness | Optimized for LLM consumption: relevance ranking, confidence filtering, token efficiency |
| Explainability | Ontology-based reasoning | Traceable reasoning paths across data, policies, and decisions |
| Integration surface | BI tools, search, semantic layers | Governance systems, orchestration platforms, quality tools, AI agents |
| AI readiness | Requires additional layers for agent use | Designed for direct AI and agent integration, including MCP support |
Where do context graphs and knowledge graphs deliver the most value?
Knowledge graphs for semantic understanding
Healthcare Research: Medical institutions use knowledge graphs to connect diseases, symptoms, treatments, and research findings. These graphs capture relationships like “Disease X is treated by Drug Y.”
Retail Product Catalogs: E-commerce platforms use knowledge graphs to model products, categories, and attributes, enabling consistent navigation of large catalogs.
Context graphs for AI-native operations
Financial Services Compliance: Banks use context graphs to encode regulations, approval workflows, and decision precedent, enabling searchable decision lineage and simplified audits.
AI-Powered Customer Support: Support teams combine product knowledge with operational context such as ticket history, policy changes, and past exceptions, allowing AI agents to handle escalations by referencing real resolution precedent.
How modern platforms combine knowledge and context graphs
Permalink to “How modern platforms combine knowledge and context graphs”Unified metadata architecture
Semantic definitions, operational signals, and temporal context live in a single graph-backed metadata layer, removing silos between glossaries, lineage, quality metrics, and data governance.
Dynamic context assembly for AI
Rather than serving static metadata, platforms assemble task-specific context, ensuring AI agents receive business definitions, lineage, quality signals, usage patterns, and policy constraints in one response.
Active metadata enrichment
The graph continuously evolves through active metadata collection: automated classification, lineage propagation, quality monitoring, and user feedback, keeping semantic models aligned with real system behavior.
Governance-aware context serving
Access controls and usage policies are enforced at query time, ensuring AI systems only see and act on context they are permitted to use.
AI-native integration
Context is exposed through standard AI interfaces such as MCP, allowing agents to retrieve grounded, structured metadata on demand.
How are enterprises using context graphs today?
Permalink to “How are enterprises using context graphs today?”Workday builds AI-ready semantic layers with Atlan's context infrastructure
"As part of Atlan's AI Labs, we're co-building the semantic layers that AI needs... All of the work that we did to get to a shared language amongst people at Workday can be leveraged by AI via Atlan's MCP server."
Joe DosSantos, Vice President of Enterprise Data & Analytics
Workday
Learn how Workday turned context into culture
Nasdaq powers AI governance with unified metadata context
"The implementation of Atlan has also led to a common understanding of data across Nasdaq... This is like having Google for our data."
Michael Weiss, Product Manager
Nasdaq
Learn how Nasdaq cut data discovery time by one-third
How knowledge graphs and context graphs work together
Knowledge graphs provide semantic understanding. Context graphs extend them with the operational intelligence AI systems need to act reliably. Together, they form the foundation for AI-ready data systems where meaning, history, policy, and provenance coexist as infrastructure rather than documentation.
See how Atlan's context layer brings knowledge graphs and context graphs together.
FAQs about context graphs vs knowledge graphs
What is the main difference between a context graph and a knowledge graph?
Knowledge graphs represent semantic relationships between entities to define “what things are.” Context graphs extend knowledge graph foundations by adding operational metadata like lineage, decision traces, temporal context, and governance policies to explain “how things work” and “why decisions were made.”
Can context graphs work with existing knowledge graph implementations?
Yes. Context graphs typically build on knowledge graph foundations rather than replacing them. Modern data catalog platforms layer operational metadata onto existing semantic structures, enriching knowledge graphs with active signals from data systems, governance workflows, and user interactions.
How do context graphs improve RAG (Retrieval-Augmented Generation) applications?
Context graphs improve RAG by providing structured operational context alongside semantic relationships. When LLMs retrieve information, they get not just definitions but quality scores indicating reliability, lineage showing data provenance, policies determining appropriate use, and temporal context showing how information evolved.
What’s the relationship between context graphs and semantic layers?
Semantic layers provide business definitions and metric logic that translate technical schemas into concepts humans understand. Context graphs extend semantic layers by adding operational intelligence, including quality metrics, lineage, governance policies, and usage patterns, that help both humans and AI systems understand how data actually behaves in production.
Do I need different tools for knowledge graphs versus context graphs?
Modern data platforms increasingly support both capabilities through unified architectures. Graph databases traditionally powering knowledge graph use cases are being extended with active metadata collection, temporal storage, and policy enforcement to support context graph use cases.
How do context graphs support AI governance and compliance?
Context graphs treat governance policies as queryable graph elements rather than external documentation. Access controls, data classification rules, and compliance requirements become nodes and relationships in the graph structure itself, with policies structurally enforced through the graph rather than applied as afterthought checks.