AI agents running on Google Agentspace, AWS AgentCore, Snowflake Cortex, and Databricks AI/BI Genie fail for one consistent reason: they lack governed context. The Enterprise Context Layer is the infrastructure stack that closes this gap — 6 interconnected components providing AI agents with versioned, machine-readable definitions, rules, and memory drawn from your actual data estate.
This glossary defines the 11 terms every enterprise AI and data leader needs to operate and govern that stack.
| Term | Infrastructure layer | Core function |
|---|---|---|
| Active Ontology | Enterprise Context Layer | Machine-readable business concepts, updated by live metadata signals |
| Enterprise Skills | AI Control Plane | Deterministic, auditable agent workflows with built-in approval gates |
| Context Repository | Enterprise Context Layer | Version-controlled business knowledge — like Git, for AI agent context |
| Model Council | AI Control Plane | Multi-model cross-verification that surfaces consensus and conflict |
| Enterprise Memory | Enterprise Context Layer | Persistent knowledge and audit trails across agent session boundaries |
| Context Layer | AI Infrastructure | Connects data stack to agents with governed interpretation |
| Enterprise Context Layer | AI Infrastructure | Federated context infrastructure across the full organization |
| Context Graph | Enterprise Context Layer | Operational metadata with lineage, governance signals, and decision traces |
| Semantic Layer | BI Infrastructure | Standardizes metrics and dimensions for human analysts |
| Knowledge Graph | Data Infrastructure | Maps static relationships between real-world entities |
| Context Engineering | AI Discipline | Dynamically assembles the right context at inference time |
What is Active Ontology?
An Active Ontology is a continuously updated, machine-readable specification of business concepts, relationships, and constraints, enriched by live metadata signals from an organization’s data estate. It updates in milliseconds, not on a quarterly documentation cycle. Unlike a static ontology curated by a small team of data architects, an active ontology is live infrastructure: a component of an AI agent’s reasoning stack that enforces business axioms and logical consistency across every interaction.

The building blocks powering continuously updated, machine-readable business intelligence. Source: Atlan.
Active Ontology is a component of the Enterprise Context Layer.
Traditional knowledge management assumes business definitions are stable. They are not. Revenue calculations change when a discount policy updates. Customer classifications shift when a segment model retrains. Static ontologies — including those built on standards like OWL or SKOS — capture how your organization understood its data six months ago. An active ontology captures how it understands its data right now.
In Atlan’s implementation, the active ontology consumes signals from connected systems: data quality scores, schema change events, data lineage updates, and governance policy modifications. When the definition of net_revenue_recognized changes in the Finance domain, every downstream agent reasoning over that term receives the governed, current definition immediately. No human manually syncs documentation. The update is immediate, auditable, and traceable to its source event.
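The update-by-event pattern described above can be sketched in a few lines of Python. This is an illustrative model only, not Atlan's API: a term registry applies metadata events, bumps a version, and records provenance so every downstream lookup returns the same current, traceable definition.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TermDefinition:
    term: str
    definition: str
    version: int
    source_event: str   # provenance: which metadata event produced this version
    updated_at: str

class ActiveOntology:
    """Minimal sketch: definitions updated by metadata events, not by hand."""
    def __init__(self):
        self._terms: dict = {}
        self._history: list = []   # audit trail of every version ever active

    def apply_event(self, term, new_definition, source_event):
        prev = self._terms.get(term)
        entry = TermDefinition(
            term=term,
            definition=new_definition,
            version=(prev.version + 1) if prev else 1,
            source_event=source_event,
            updated_at=datetime.now(timezone.utc).isoformat(),
        )
        self._terms[term] = entry
        self._history.append(entry)

    def resolve(self, term):
        """Every agent sees the same current, traceable definition."""
        return self._terms[term]

ontology = ActiveOntology()
ontology.apply_event("net_revenue_recognized",
                     "gross_revenue - discounts - refunds",
                     source_event="finance.policy_update#4812")
ontology.apply_event("net_revenue_recognized",
                     "gross_revenue - discounts - refunds - credits",
                     source_event="finance.policy_update#4907")

current = ontology.resolve("net_revenue_recognized")
print(current.version, current.source_event)  # 2 finance.policy_update#4907
```

No human syncs documentation here: the second event supersedes the first, and the audit trail retains both versions with their source events.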
vs. Static ontology: Static ontologies are versioned reference documents maintained by a small team and consulted during pipeline development. Active ontologies are live, queryable infrastructure — updated by metadata events and auditable by governance teams on every agent interaction.
Why it matters: Agents that hallucinate often do so because they operate on stale or ungoverned business definitions. Active ontologies close the gap between what a model was trained to know and what your organization actually means today.
What are Enterprise Skills?
Enterprise Skills are encapsulated, governed business workflows that AI agents invoke through protocols like the Model Context Protocol (MCP) to execute mission-critical tasks with 3 guarantees: repeatable outputs, auditable action logs, and enterprise-safe execution boundaries. A skill is a complete business operation. It includes the business logic, validation steps, approval gates, and audit trail required to perform actions like creating an invoice, processing a refund, or triggering a compliance report. The agent invokes the skill; the skill handles execution.

How Enterprise Skills ensure safe, auditable, and repeatable AI agent execution. Source: Atlan.
Enterprise Skills sit within the Enterprise Context Layer.
The problem with raw API calls in agentic workflows is probabilistic execution. When an LLM decides how to call an API, it reasons from general training — not your specific business rules. Two invocations of the same task may produce different outputs.
Consider a calculate_commission skill invoked twice in the same quarter. One call catches a mid-quarter accelerator update; the other doesn’t. Both return plausible numbers. Neither flags an error. There’s no record of which compensation policy version each used. Finance closes the quarter on mismatched figures with no audit trail. Enterprise Skills prevent this by encoding the policy version lookup inside the skill: the agent executes the skill; the skill owns the business rule.
In practice, an enterprise skill for process_customer_refund doesn’t ask the agent to determine the refund policy, calculate eligibility, or format the API payload. It handles all three and logs every invocation with a timestamp, the agent identity, and the business context that triggered it. Compliance teams can audit every action the AI agent took, at the skill level, with a complete chain of custody.
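The skill-owns-the-rule pattern can be sketched as follows. The policy table, skill name, and log structure are hypothetical; the point is that the policy version lookup and the audit record live inside the skill, not in the agent's reasoning.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative policy registry: the skill, never the agent, resolves the active version.
POLICY_VERSIONS = {"commission_plan": ("2026-Q1.v2", 0.08)}  # (version id, rate)

@dataclass
class SkillInvocation:
    skill: str
    agent_id: str
    policy_version: str
    inputs: dict
    output: float
    timestamp: str

AUDIT_LOG = []

def calculate_commission(agent_id, deal_value, context):
    """Deterministic skill: encodes the business rule and logs every invocation."""
    version, rate = POLICY_VERSIONS["commission_plan"]
    commission = round(deal_value * rate, 2)
    AUDIT_LOG.append(SkillInvocation(
        skill="calculate_commission",
        agent_id=agent_id,
        policy_version=version,   # the answer to "which policy did this call use?"
        inputs={"deal_value": deal_value, **context},
        output=commission,
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
    return commission

first = calculate_commission("sales-agent-07", 50_000, {"quarter": "2026-Q1"})
second = calculate_commission("sales-agent-07", 50_000, {"quarter": "2026-Q1"})
assert first == second   # same inputs, same policy version, same output
```

Two invocations in the same quarter now cannot silently use different accelerator rates: both read the registry, and both log which version they read.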
vs. Raw AI tools: Raw tools expose basic functions and require the AI to reason about execution. Enterprise Skills encapsulate complete, governed business processes — with the validation, approval gates, and audit trail required for finance, HR, and legal workflows where probabilistic execution is not acceptable.
What is a Context Repository?
A Context Repository is the enterprise governance counterpart to a Git repository, version-controlling 3 categories of agent knowledge: the business definitions, access control policies, and compliance metadata that an AI agent reads at runtime. It is not conversational memory. It is governed infrastructure with lineage tracking and audit trails for the organizational knowledge that makes AI deployments reproducible and accountable.

Three governed knowledge layers every AI agent reads at runtime. Source: Atlan.
Context Repository forms part of the Enterprise Context Layer.
Git repositories track who changed what, when, and why — making software deployments reproducible. Context repositories provide the same guarantees for AI reasoning: which business definitions an agent was operating on at the time of a decision, which version of the governance policy was active, and who approved the current configuration. When an audit requires you to explain why an AI agent took a specific action in Q1, the context repo provides the answer.
Concretely: the context repo records that on March 15th, the finance-forecaster agent accessed the Q4_revenue_model context object at version 3.1, approved by the CFO on March 3rd, and restricted to Finance-domain agents. That record is queryable by any authorized risk reviewer.
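A minimal sketch of that record-keeping, with hypothetical object and field names. Like Git, it keeps every version retrievable while tracking which one is current:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextRecord:
    object_name: str
    version: str
    approved_by: str
    approved_on: str
    allowed_domains: tuple

class ContextRepository:
    """Sketch: version-controlled context objects an agent reads at runtime."""
    def __init__(self):
        self._records = {}   # (name, version) -> ContextRecord
        self._current = {}   # name -> current version

    def commit(self, record):
        self._records[(record.object_name, record.version)] = record
        self._current[record.object_name] = record.version

    def checkout(self, name, version=None):
        """Fetch the current version, or any historical one for an audit."""
        version = version or self._current[name]
        return self._records[(name, version)]

repo = ContextRepository()
repo.commit(ContextRecord("Q4_revenue_model", "3.0", "CFO", "2026-01-10", ("Finance",)))
repo.commit(ContextRecord("Q4_revenue_model", "3.1", "CFO", "2026-03-03", ("Finance",)))

rec = repo.checkout("Q4_revenue_model")   # what an agent reads today
assert rec.version == "3.1" and rec.approved_by == "CFO"
```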
vs. Data repository: Data repositories store raw information. Context repositories store the versioned knowledge required to interpret that information — what it means, who owns it, what policies govern its use, and what an AI agent is permitted to do with it.
What is a Model Council?
A Model Council is a multi-model validation framework in which a single query is simultaneously executed across 3 or more frontier AI models — Claude, GPT-4o, and Gemini, for example — with a synthesizer model analyzing points of consensus and conflict to produce a cross-verified, enterprise-grade output. Rather than accepting any single model’s response as authoritative, the council triangulates intelligence across models with different training distributions, reducing the hallucination and bias risk that single-model reliance introduces.
Model Council is a component of the Enterprise Context Layer, within the AI Control Plane.
In an enterprise context, the Model Council serves a specific governance function: it arbitrates between models when outputs diverge, routes queries to the model best suited for a given domain (legal, financial, technical), and maintains an audit log of which models were consulted, how they differed, and how the synthesizer resolved the conflict.
For a financial compliance query, a Model Council might run GPT-4o for regulatory language precision, Claude for reasoning quality, and Gemini for retrieval breadth. It then surfaces a synthesized answer with confidence scores and the specific divergence points a human reviewer should examine. The council isn’t about finding the “best” model — it’s about surfacing the limits of what any single model knows.
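The fan-out-and-synthesize loop can be sketched with stand-in model callables; a real deployment would call model APIs, and the naive agreement score below stands in for a synthesizer model:

```python
from collections import Counter

def council(query, models):
    """Sketch: run one query across several models, surface consensus and conflict."""
    answers = {name: model(query) for name, model in models.items()}
    tally = Counter(answers.values())
    consensus, votes = tally.most_common(1)[0]
    divergents = {n: a for n, a in answers.items() if a != consensus}
    return {
        "consensus": consensus,
        "confidence": votes / len(models),   # naive agreement-based score
        "divergence": divergents,            # what a human reviewer should examine
    }

# Stand-in callables; in production these would be clients for different models.
models = {
    "model_a": lambda q: "compliant",
    "model_b": lambda q: "compliant",
    "model_c": lambda q: "needs_review",
}
result = council("Is clause 4.2 GDPR-compliant?", models)
print(result["consensus"], result["divergence"])
# compliant {'model_c': 'needs_review'}
```

The divergence field is the governance payload: it tells the human reviewer exactly where the models disagreed, rather than hiding the minority answer.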
vs. Model picker: A model picker routes a query to one model based on cost or capability. A Model Council runs the same query across multiple models simultaneously, synthesizes where they agree, and surfaces where they don't — providing a structured, high-confidence output that mitigates the inherent gaps of any single model.
What is Enterprise Memory?
Enterprise Memory is a governed, persistent infrastructure that gives stateless large language models 3 capabilities across session boundaries: recall of prior interactions, access to organizational precedents, and lineage-tracked audit trails required for production compliance. It combines vector databases for semantic similarity retrieval with knowledge graphs for structured factual recall.
Enterprise Memory lives inside the Enterprise Context Layer.
Every enterprise AI deployment without persistent memory starts cold on every session: no knowledge of prior decisions, no access to organizational precedents, no ability to build on past reasoning. A context window is RAM — fast, erased when the session ends. Enterprise Memory is disk storage: persistent, queryable, and auditable. For customer success, legal analysis, or financial modeling workflows, this cold-start problem isn’t a UX inconvenience. It’s a disqualifying capability gap.
The governance distinction separates enterprise memory from consumer alternatives. Mem0, with a $24M raise and 52,000+ GitHub stars, provides persistent agent memory — but does not yet offer enterprise-grade, end-to-end audit trails that tie memories back to upstream data lineage and governance approvals. Microsoft Azure AI Foundry, Oracle, Letta, and Zep offer memory capabilities, but typically do not combine lineage tracking (which data informed this memory?), fine-grained access control (who can query it?), and compliance-oriented auditability (can a risk team review what this agent has learned, by version and source?) as first-class, integrated features.
What enterprise audit trails actually look like: on March 15th, the finance-forecaster agent accessed the Q4_revenue_model context object at version 3.1, approved by the CFO on March 3rd and restricted to Finance-domain agents. That record is queryable by any authorized risk reviewer. Consumer memory tools do not produce this.
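A toy model of the governance distinction, with lineage and access control as first-class fields on every memory (all names illustrative):

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    source_lineage: list   # which upstream data informed this memory
    allowed_roles: set     # who may query it
    version: int = 1

class EnterpriseMemory:
    """Sketch: persistent, governed memory that survives session boundaries."""
    def __init__(self):
        self._store = {}

    def remember(self, key, content, source_lineage, allowed_roles):
        prev = self._store.get(key)
        self._store[key] = Memory(content, source_lineage, set(allowed_roles),
                                  version=(prev.version + 1) if prev else 1)

    def recall(self, key, role):
        """Access-controlled recall: a risk team can see what, and from where."""
        mem = self._store[key]
        if role not in mem.allowed_roles:
            raise PermissionError(f"role {role!r} may not read {key!r}")
        return mem

mem = EnterpriseMemory()
mem.remember("q4_forecast_rationale",
             "Forecast lowered 4% after churn spike.",
             source_lineage=["warehouse.churn_daily"],
             allowed_roles={"finance"})
recalled = mem.recall("q4_forecast_rationale", "finance")
```

A consumer memory tool typically stores only `content`; the lineage and role fields are what make the memory reviewable by version and source.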
vs. Context window: A context window is ephemeral "RAM" for a single session. Enterprise Memory is persistent "disk" storage: governed, auditable, and cumulative across every agent interaction in your organization's history.
What is the Context Layer?
The Context Layer is a persistent, versioned infrastructure tier connecting your data stack to your AI agents across 3 dimensions of interpretation: business semantics, operational rules, and institutional knowledge. It provides the governed interpretation layer that prevents hallucinations, business rule misapplication, and stale definitions from reaching production agent decisions.
OpenAI’s February 2026 Frontier platform introduced a “shared business context” capability connecting enterprise data so agents operate with organizational context — an independent signal of the infrastructure gap the context layer addresses.
The context layer doesn’t store raw data. It stores the meaning of data: what customer_lifetime_value means in Marketing vs. Finance, which version is currently approved for agent use, and which lineage path traces back to the source systems that populate it. This disambiguation allows multiple AI agents to collaborate on shared tasks without contradictory outputs. Each agent draws from the same governed definition — not from its own independently inferred interpretation.
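The disambiguation can be sketched as a lookup keyed by term and domain; the formulas and lineage paths below are invented for illustration:

```python
# Same term, two governed definitions — each with approval status and lineage.
CONTEXT = {
    ("customer_lifetime_value", "Marketing"): {
        "formula": "avg_order_value * purchase_frequency * retention_years",
        "approved_for_agents": True,
        "lineage": ["crm.orders", "crm.customers"],
    },
    ("customer_lifetime_value", "Finance"): {
        "formula": "discounted_sum(net_margin_per_period)",
        "approved_for_agents": True,
        "lineage": ["erp.margins"],
    },
}

def resolve(term, domain):
    """Agents resolve a term through the context layer, never by guessing."""
    return CONTEXT[(term, domain)]

marketing = resolve("customer_lifetime_value", "Marketing")
finance = resolve("customer_lifetime_value", "Finance")
assert marketing["formula"] != finance["formula"]  # same term, two governed meanings
```

Two agents collaborating on a shared task both call `resolve`, so neither invents its own interpretation of the term.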
vs. Semantic layer: The semantic layer standardizes business metrics and dimensions for BI tools and SQL-based analytics — it serves human analysts. The context layer provides governance, lineage, and operational rules for AI agent autonomy — it serves machines.
What is the Enterprise Context Layer?
The Enterprise Context Layer is the full federated implementation of context infrastructure across an organization, providing AI agents with governed access to business definitions, operational rules, memory, and execution capabilities at enterprise scale. It is composed of 6 interconnected components:
- Data Graph — the queryable network of your data assets, pipelines, and schemas
- Active Ontology — continuously updated, machine-readable business definitions
- Enterprise Memory — persistent, governed knowledge across session boundaries
- Enterprise Skills — deterministic, auditable agent workflows
- Context Repositories — version-controlled domain knowledge
- AI Control Plane — tools, guardrails, evals, access controls, a model gateway, and a Model Council
The Enterprise Context Layer sits between your agent platforms (Google Agentspace, AWS AgentCore, Snowflake Cortex, Databricks Genie) and the systems those agents need to reason over: systems of record, data platforms, knowledge repositories, and semantic tools.
The “enterprise” qualifier matters. A local context implementation creates isolated islands: the Sales agent has its definition of revenue, the Finance agent has a different one, and they cannot collaborate without producing contradictory outputs. The Enterprise Context Layer federates definitions under a single governance model while preserving domain-specific nuance. Sales and Finance each maintain their revenue definition, but the layer tracks the relationship between them and surfaces the conflict when an agent tries to reconcile them.
What is the AI Control Plane?
The AI Control Plane is a subsystem within the Enterprise Context Layer, not a standalone market category. It manages the tools, guardrails, evals, access controls, and model routing — including the Model Council — that make agentic execution safe, predictable, and auditable.
What is a Context Graph?
A Context Graph is an operational metadata network that captures 4 dimensions absent from traditional knowledge graphs: dynamic relationships, temporal qualifiers, governance signals, and decision traces. These dimensions allow AI agents to understand not just what an entity is, but how and why it is in its current state — including its lineage, the exceptions applied to it, and the governance rules active at inference time.
Context Graph is the operational foundation of the Enterprise Context Layer.
The hierarchy of semantic infrastructure runs from taxonomy (parent-child categorization) to ontology (logical rules and constraints) to knowledge graph (rules populated with real-world entities) to context graph — which adds the operational layer all prior levels lack:
| Level | Structure | Primary function | Example |
|---|---|---|---|
| Basic | Taxonomy | Hierarchical categorization | “Mammal → Dog” |
| Intermediate | Ontology | Logical rules and axioms | “X treats Y” |
| Advanced | Knowledge Graph | Rules populated with real data | “Patient A has Disease B” |
| Strategic | Context Graph | Operational and temporal signals | “VP approved 20% exception on March 14 for campaign X” |
A knowledge graph knows that revenue relates to orders. A context graph knows that a 20% discount exception was approved by a VP on March 14, 2026, that it applies specifically to the enterprise_q1_promo campaign, and that it should not be used as a precedent without a second VP approval. Agents reasoning about revenue need the second version.
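The difference is visible in the edge structure itself. A sketch with hypothetical field names: a knowledge-graph edge stops at the relationship, while a context-graph edge carries the temporal qualifier, governance signal, and decision trace alongside it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContextEdge:
    subject: str
    relation: str
    obj: str
    valid_from: Optional[str] = None       # temporal qualifier
    approved_by: Optional[str] = None      # governance signal
    decision_trace: Optional[str] = None   # why this state exists

# A knowledge-graph edge: the static relationship only.
kg_edge = ContextEdge("revenue", "derived_from", "orders")

# A context-graph edge: the same triple shape, plus the operational dimensions.
cg_edge = ContextEdge(
    "enterprise_q1_promo", "has_exception", "20%_discount",
    valid_from="2026-03-14",
    approved_by="VP_Sales",
    decision_trace="One-off approval; not a precedent without second VP sign-off.",
)

assert kg_edge.approved_by is None        # "what relates to what"
assert cg_edge.approved_by == "VP_Sales"  # "how and why it is in this state"
```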
vs. Knowledge graph: Knowledge graphs map static relationships between real-world entities. Context graphs add the operational layer: live signals, temporal changes, governance decisions, and exception history. That distinction separates agents that retrieve information from agents that act on it reliably.
What is the Semantic Layer?
The Semantic Layer is a translation infrastructure that standardizes 3 categories of data representation — metrics, dimensions, and hierarchies — for BI tools and SQL-based analytics, enabling human analysts to query data using business terminology like MRR or active_customers without knowing the underlying database schema. It is optimized for human-driven query workflows, not machine-driven agent reasoning.
The semantic layer and context layer are complementary, not competing. A finance agent with access to a semantic layer that defines MRR as (new_ARR + expansion_ARR) / 12 will still generate a wrong board report if it doesn’t know that expansion_ARR was recalculated mid-quarter after a contract amendment, and that the pre-amendment figure is what the board approved. The semantic layer says what the formula is. The context layer tracks which version of the inputs is currently authoritative, who approved the change, and when it took effect.
vs. Context layer: A semantic layer defines what MRR means for a BI dashboard. A context layer defines what MRR means to an AI agent that must reason about revenue, route exceptions for human review, or generate a board report without hallucinating the calculation methodology. The semantic layer serves SQL tools; the context layer serves agents.
What is a Knowledge Graph?
A Knowledge Graph is a structured network connecting real-world entities across 4 primary categories — people, products, locations, and concepts — through semantic relationships stored in a graph database for search, discovery, and human reasoning support. It answers: “what is this?” and “how does it relate to that?”
Knowledge graphs are the precursor to context graphs in the hierarchy of semantic infrastructure. They excel at static entity disambiguation: Google’s Knowledge Graph resolves “Mercury” to planet, element, or band based on surrounding context. But they lack the operational layer required for enterprise AI agents — they don’t capture temporal change, governance decisions, exception history, or decision traces.
Where they fall short for agents: a knowledge graph knows Customer A has Status: Premium. A context graph knows Customer A was reclassified to Premium on February 3rd following a $50K expansion deal, that the classification triggers a specific SLA, and that a quarterly review is scheduled for May 2026. Agents need the second version.
vs. Context graph: Knowledge graphs answer "what is this entity and how does it relate to others?" Context graphs answer "what is this entity, why is it in its current state, what exceptions apply, and what governance rules are active?" Agents need the latter to act reliably in production.
What is Context Engineering?
Context Engineering is the technical discipline of designing systems that dynamically assemble the right information, business rules, and memory into an AI agent’s context window at inference time. Gartner has cited context engineering as the successor to prompt engineering and projects that 80% of AI application tooling will incorporate it by 2028. Championed by Tobi Lütke and Andrej Karpathy, it treats context assembly as an engineering discipline — designed, versioned, and governed rather than hand-tuned per query.
Where prompt engineering crafts the right instruction, context engineering builds the system that supplies the right information automatically — so the instruction always has the governed context it needs to produce a reliable output.
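The core loop can be sketched in a few lines: rank candidate context snippets against the query and pack the window within a budget. The keyword-overlap scoring below is a stand-in for real retrieval (embeddings, lineage-aware filters), and all snippets are invented.

```python
def assemble_context(query, budget_chars, sources):
    """Sketch: score candidate snippets, then pack the window within budget."""
    q_terms = set(query.lower().split())
    # Rank by naive keyword overlap with the query (stand-in for retrieval).
    scored = sorted(sources,
                    key=lambda s: len(q_terms & set(s.lower().split())),
                    reverse=True)
    window, used = [], 0
    for snippet in scored:
        if used + len(snippet) > budget_chars:
            continue   # skip snippets that would overflow the window
        window.append(snippet)
        used += len(snippet)
    return "\n".join(window)

sources = [
    "Definition: MRR = (new_ARR + expansion_ARR) / 12, version 2, approved 2026-03-03.",
    "Policy: agents must route revenue exceptions for human review.",
    "Unrelated onboarding checklist for new hires.",
]
context = assemble_context("Explain the MRR revenue calculation", 150, sources)
assert "MRR" in context and "Unrelated" not in context
```

The engineering lives in the ranking and packing policy: it is designed and versioned as a system, not rewritten per query by hand.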
The architecture your agents are missing
Hallucinations, compliance gaps, and contradictory agent outputs share one root cause: no governed context. Active Ontology governs meaning. Enterprise Memory preserves intelligence across sessions. Enterprise Skills make agent actions deterministic and auditable. Context Repositories version-control domain knowledge. Model Council cross-verifies reasoning across frontier models. The Context Graph surfaces the operational signals — lineage, exceptions, decisions — that separate agents that retrieve from agents that act reliably.
Talk to an Atlan expert about the Enterprise Context Layer in production.