Gartner Validated the Context Layer. Atlan Has Been Building It.

TD Sarma
Director of Analyst Relations
Updated: 04/06/2026 | Published: 04/06/2026
8 min read

Key takeaways

  • Only 20% of organizations report significant value from GenAI — the missing piece is a dedicated context layer
  • Gartner's three components describe what a context layer contains, not how to build one that improves over time
  • Four operational components make context layers work in practice — cataloging, curation, engineering, and retrieval
  • Context quality compounds — start from your existing data graph, not a two-year ontology project

Enterprises across industries are reaching the same inflection point: agents are shipped, they mostly work, and yet nobody uses them. The blocker isn’t a weak model or an overly ambitious use case. It’s that agents don’t know enough about how the business works to be trusted.

This is the context problem — and Gartner formalized it at the 2026 Data and Analytics Summit. Their research finds that despite heavy investment in AI over the past five years, only 20% of organizations report significant value from GenAI tools. The cause? Lack of a context layer.

This echoes what we’ve been saying for over a year: without context, agents hallucinate, contradict themselves, and produce outputs that can’t be trusted — especially at enterprise scale.

Gartner proposes a three-component framework for addressing this: semantics, operational state, and provenance. But that raises the question: how do those three components behave when you build them in production? In reality, they work not as parallel workstreams but as a compounding flywheel. That distinction changes how you build, and whether what you build stays useful.


What the context layer is (and isn’t)


The context layer is a persistent, versioned, portable layer of enterprise knowledge — built from existing business systems — that AI agents query at runtime. It sits between your data infrastructure and your agent infrastructure, filling the knowledge gap between data systems and agents.

Atlan context layer architecture — Interfaces & Agents on top, Enterprise Context Layer in the middle, Business Systems at the bottom

What the context layer is not: a data catalog, a semantic layer, a vector database, or a one-time project.

What it is: living infrastructure that compounds over time, open and interoperable, governed through automation rather than manual review. It’s dynamic, versioned, portable, and accessible by every agent in your stack.


Gartner’s context taxonomy


Gartner analyst Afraz Jaffri describes the context layer as three components:

  1. Semantics: ontologies, business glossaries, knowledge graphs
  2. Operational state: right-time access to entities, processes, and current conditions
  3. Provenance: tracking data, decisions, actions, and outcomes across an agent’s lifecycle

Gartner context layer taxonomy — three components: Semantics, Operational State, and Provenance

Put simply: internal meaning, current reality, and past lineage.

The data backing this taxonomy clarifies the problem: organizations that implement semantic modeling are 2.2 times more likely to support AI with effective data engineering practices. Still, only 40% have taken that step.

The framework describes what the context layer contains, not how to build one that stays useful over time. Semantics, operational state, and provenance are components to assemble a context layer — but treating them as parallel engineering workstreams creates a static snapshot of enterprise knowledge that doesn’t improve with time. That’s where four operational components become critical.


Four operational components of a context layer


Knowing what the context layer contains is important, but building a working one requires understanding what you do with it. Those are different problems, and conflating them will stall context layer projects.

There are four operational components of a working context layer:

Context cataloging covers the context itself and its versioning — how and why context has changed over time. This is the foundation: knowing what enterprise knowledge you have, where it came from, and how it has evolved. Without versioning, you can’t trust the context, and you can’t trace why an agent made a particular decision.
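As an illustration of why versioning matters here, the following is a minimal sketch in Python. The `ContextEntry` structure and its fields are hypothetical, not Atlan's API: each update appends a new version with a reason, so both the current definition and the history behind an agent's decision stay queryable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContextVersion:
    value: str       # the definition or fact at this point in time
    reason: str      # why it changed
    at: datetime

@dataclass
class ContextEntry:
    key: str
    versions: list[ContextVersion] = field(default_factory=list)

    def update(self, value: str, reason: str) -> None:
        # Never overwrite; append a new version so history is traceable.
        self.versions.append(
            ContextVersion(value, reason, datetime.now(timezone.utc))
        )

    @property
    def current(self) -> str:
        return self.versions[-1].value

    def history(self) -> list[tuple[str, str]]:
        return [(v.value, v.reason) for v in self.versions]

entry = ContextEntry("metric:revenue")
entry.update("SUM(amount) WHERE status = 'closed'",
             "initial definition from finance")
entry.update("SUM(amount) WHERE status = 'closed-won'",
             "aligned with sales pipeline stages")
```

An agent reads `entry.current`; an auditor asking "why did the agent use this definition last quarter?" walks `entry.history()`.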

Context curation is the process of building, refining, and certifying a company’s specific context: business logic, tribal knowledge, metrics definitions, policies. This is the human-on-the-loop work — not manual metadata entry, but AI surfacing conflicts and decisions that only humans can resolve. When finance and sales define “revenue” differently at the code level, that’s a curation problem. A metrics conflict agent reads all SQL, surfaces the discrepancy to the right people, and one human makes one call that updates context everywhere.
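The metrics-conflict pattern above can be sketched in a few lines. This is an illustrative stand-in, not Atlan's agent: it normalizes SQL definitions of the same metric across teams and flags any metric whose definitions disagree, so one human can make one call.

```python
import re
from collections import defaultdict

def normalize_sql(sql: str) -> str:
    """Collapse whitespace and casing so cosmetic differences don't count."""
    return re.sub(r"\s+", " ", sql.strip()).lower()

def find_metric_conflicts(definitions: list[dict]) -> dict[str, list[dict]]:
    """Group definitions by metric name; a metric with more than one
    distinct normalized SQL body is a conflict to surface to a human."""
    by_metric = defaultdict(list)
    for d in definitions:
        by_metric[d["metric"]].append(d)
    return {
        metric: defs
        for metric, defs in by_metric.items()
        if len({normalize_sql(d["sql"]) for d in defs}) > 1
    }

definitions = [
    {"metric": "revenue", "team": "finance",
     "sql": "SELECT SUM(amount) FROM orders WHERE status = 'closed'"},
    {"metric": "revenue", "team": "sales",
     "sql": "SELECT SUM(amount) FROM orders WHERE status = 'closed-won'"},
]
conflicts = find_metric_conflicts(definitions)
# "revenue" is flagged: finance and sales define it differently at the code level.
```

The human resolution then becomes a single context update that every agent inherits.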

Context engineering improves context over time so that agents reach the accuracy threshold where people trust them. In our experience with customers, the bar to clear is roughly 70% accuracy for most enterprise use cases. This is the flywheel: simulation before deployment (can I ship this?), real usage feedback after (what questions revealed gaps?), and iteration (what can be improved?). Context engineers optimize these feedback loops so that knowledge and quality compound over time.
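A toy version of the simulation gate (all names here are hypothetical): score an agent against simulated questions, compare accuracy to the roughly 70% trust threshold, and keep the failing questions as the context gaps to fix in the next iteration.

```python
def run_simulation(agent, cases, threshold=0.7):
    """Score the agent on simulated questions; return whether it clears
    the trust threshold, plus the questions that revealed context gaps."""
    gaps = [question for question, expected in cases
            if agent(question) != expected]
    accuracy = 1 - len(gaps) / len(cases)
    return accuracy >= threshold, accuracy, gaps

# Toy agent backed by a context dictionary; misses point at missing context.
context = {"What counts as revenue?": "closed-won orders"}
agent = lambda question: context.get(question)

cases = [
    ("What counts as revenue?", "closed-won orders"),
    ("Which region owns churn?", "EMEA ops"),
]
ship, accuracy, gaps = run_simulation(agent, cases)
# accuracy is 0.5 here, below the 0.7 bar; the second question is the gap to fill.
```

In the flywheel, `gaps` feeds curation, curation updates context, and the next simulation run scores higher.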

Context retrieval is how agents pull exactly what they need at runtime — via MCP, SDK, or SQL, depending on the agent framework. This is the portability layer: the same enterprise context powering Cortex, Agent Space, Sierra, and a LangGraph workflow, without re-engineering context for each platform.

The compounding flywheel — Gartner's three structural components (Semantics, Operational State, Provenance) feeding into Atlan's four operational components (Context Cataloging, Curation, Engineering, Retrieval)

These four components are sequential and dependent. You can’t curate what you haven’t cataloged, you can’t engineer accuracy without a curation workflow, and retrieval is only as good as the context it’s retrieving. This is the flywheel that makes Gartner’s three-component framework practical.


Where theory meets practice


Gartner’s taxonomy was developed from research and client inquiry data. Atlan’s perspective was developed from building context layers alongside real customers. In AI Labs, establishing a context layer improved AI analysts’ answers by 5x, and Atlan’s own internal research found that enhanced metadata increased AI SQL accuracy by 38%. This is clear, quantifiable proof of the impact that context has on AI reliability.

Our two perspectives are complementary — and where they differ, it’s because practice reveals nuances that a framework can’t fully anticipate.

On semantics: the ontology is a destination, not a starting point. Gartner identifies semantic modeling as foundational, and we agree; what production adds is sequencing. Formal ontology projects — SHACL rules, top-down knowledge graphs — are the right destination, but rarely the right entry point. AI can bootstrap a usable ontology from column lineage and SQL in days. Humans then refine something that is roughly 60% accurate, rather than starting from a blank slate. The ontology emerges from usage rather than preceding it.
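As a rough illustration of bootstrapping from usage (a hypothetical helper with deliberately simplified SQL parsing): count join conditions across query history and promote the recurring ones to draft ontology edges for humans to confirm or reject.

```python
import re
from collections import Counter

JOIN_RE = re.compile(
    r"join\s+\w+\s+on\s+(\w+)\.(\w+)\s*=\s*(\w+)\.(\w+)", re.IGNORECASE
)

def candidate_edges(queries: list[str], min_count: int = 2) -> list[tuple]:
    """Count join conditions across query history; joins that recur become
    draft ontology edges (table, column, table, column) for human review."""
    counts = Counter()
    for sql in queries:
        for match in JOIN_RE.finditer(sql):
            counts[match.groups()] += 1
    return [edge for edge, n in counts.items() if n >= min_count]

queries = [
    "SELECT * FROM orders JOIN customers ON orders.customer_id = customers.id",
    "SELECT * FROM orders JOIN customers ON orders.customer_id = customers.id",
    "SELECT * FROM orders JOIN refunds ON orders.id = refunds.order_id",
]
edges = candidate_edges(queries)
# orders->customers recurs and survives the threshold; the one-off join does not.
```

The output is not a finished ontology; it is a 60%-accurate draft that humans refine, which is the point.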

On operational state: historical signals matter as much as real-time ones. Gartner emphasizes right-time data — event-driven analytics and real-time access. But the operational state also includes historical usage signals: what SQL queries have been run against a table, which BI dashboards reference a metric, who accessed a dataset and what they did with it. These patterns are often richer signals than real-time streams alone, and they’re how AI learns what data means in practice rather than in documentation.

On provenance: tracking the past and learning from it are different capabilities. Gartner rightly frames provenance as traceability and auditability. In production, provenance is also the mechanism through which context improves over time. Enterprise memory — accumulated learning from agent interactions, feedback, and corrections — is what makes the tenth agent dramatically better than the first. Building provenance only for compliance captures only half its value.


A note on “context graph” terminology


Gartner cautions against “context graph” as ambiguous terminology, noting it typically refers to the provenance component rather than the full context layer. The caution is well-taken.

At Atlan, “context graph” refers to the structured representation of enterprise data assets and their relationships: lineage, SQL history, BI semantics, classification, quality signals, business glossary. That’s closer to Gartner’s semantics plus operational state, bound into a traversable knowledge structure — not just provenance.

Executives evaluating solutions should ask vendors for their specific definition. Lineage tracking, an ontology, and a full knowledge graph are different things with different implementation requirements.


The checklist for data leaders


Gartner’s recommended actions are sound: start with high-value use cases, adopt MCP for agent access, and ensure data governance foundations are in place. Three additions for teams that are building context layers:

Start from your data graph, not a design document. Column lineage, SQL query history, and BI semantic definitions aren’t a starting point to organize — they’re context that already exists and that AI can activate. Two years from now, you want a context layer that’s been compounding from real usage, not one designed from first principles and about to meet reality for the first time.

Treat the accuracy threshold as your first milestone. Context layer maturity isn’t measured by ontology completeness. It’s measured by whether your agents perform above the threshold where people trust them enough to use them. Simulation — generating likely agent questions and scoring context before deployment — is how you verify this before you ship.

Build for portability from day one. You will not run the same agents in three years. Models will change, frameworks will change. The only durable investment is the context layer itself — and only if it exposes context through open interfaces any agent framework can consume. Engineering context into a single vendor’s proprietary stack is the new version of the lock-in problem enterprises spent a decade undoing after the first wave of BI tools.


The decision that can’t wait


Gartner predicts that by 2027, organizations that prioritize semantics in AI-ready data will increase agentic AI accuracy by up to 80% and reduce costs by up to 60%. The exact figures may prove optimistic or conservative — but the trend line is undeniable.

The decision executives are making right now isn’t whether to build a context layer. It’s how to build one that doesn’t stall.

Organizations that treat this as a three-workstream engineering project will spend 18 months building something static, then wonder why their agents still don’t perform. But those that treat it as a compounding system — starting from their existing data graph, bootstrapping semantics with AI, and investing in the memory loop that makes every agent interaction better than the last — will be on their third iteration of agents by the time others are still designing their ontologies.

As Gartner states, the context layer is foundational for AI success. But how you build it determines whether it compounds or stagnates.

Build the flywheel.


Atlan is the context layer for AI — the active metadata platform that gives AI agents the context they need to work with enterprise data reliably, at scale.
