AI agents use four types of memory drawn from cognitive science — in-context (working) memory, episodic memory, semantic memory, and procedural memory — formalised for LLMs in the CoALA framework (Princeton, arXiv:2309.02427). Each type stores a different class of information: the live context window, past events, factual knowledge, and behavioural rules respectively. Enterprise data agents running against live data estates require a fifth type the standard taxonomy omits: organisational context memory.
Quick Facts Table
| Memory Type | What It Stores | Where It Lives | Example Tool |
|---|---|---|---|
| In-Context (Working) | Active session: system prompt, messages, tool outputs | Context window (LLM directly) | All LLMs; Letta Core Memory |
| Episodic | Records of past events and interactions | External DB, retrieved on demand | Letta Recall Memory, Mem0, LangChain buffer |
| Semantic | Facts, definitions, accumulated knowledge | Vector DB or knowledge graph | Pinecone, Weaviate, Mem0 Semantic API |
| Procedural | Skills, rules, behavioural instructions | System prompts, agent code, LLM weights | System prompts, LangChain tool definitions |
| Organisational Context | Governed enterprise definitions, lineage, policies, entity identity | Metadata graph / context layer | Atlan Context Layer, governed data catalog |
Academic grounding: CoALA framework, Princeton (arXiv:2309.02427, 2023) — canonical reference for types 1–4. Type 5 supported by arXiv:2603.17787 (Governed Memory, March 2026).
What are the four types of AI agent memory?
The four standard types are in-context memory (the active context window), episodic memory (records of past interactions), semantic memory (factual world knowledge), and procedural memory (rules and skills). The CoALA paper (Princeton, 2023) formalised this taxonomy from cognitive science for language model agents. Together they cover most of what a chatbot or assistant-style agent needs.
Why the taxonomy has staying power
The cognitive science roots run deep. Endel Tulving’s 1972 distinction between episodic and semantic memory gave AI researchers a ready-made framework. Baddeley and Hitch formalised working memory in 1974, and Larry Squire added procedural memory in 1987. When the CoALA paper translated all three into an AI agent architecture framework in 2023, it gave the field a language it has largely accepted.
CoALA organises agents along three dimensions: information storage, action space, and decision-making loop. The memory taxonomy sits inside the information storage dimension: working memory for immediate context, and three long-term stores for events, facts, and procedures. IBM, MongoDB, LangChain, Letta, and Mem0 all use a version of this model in their documentation. For a deeper look at what this taxonomy assumes about the knowledge substrate, see Metadata Layer for AI.
A December 2025 survey, “Memory in the Age of AI Agents” (arXiv:2512.13564), notes the field is fragmenting as research expands. Consolidation pathways from episodic to semantic memory, the transition to in-weights implicit knowledge, and multi-agent memory governance are all active areas. The four-type model is a starting point, not a final answer.
The question the rest of this article answers: what does this taxonomy miss for enterprise agents operating against governed data estates?
In-context memory: the agent’s working scratchpad
In-context memory is the context window — everything the LLM processes in a single inference: system prompt, conversation history, retrieved chunks, and tool outputs. It is the only memory the model directly reasons over. All other memory types must be retrieved into the context window to influence generation. Context engineering is the discipline of deciding what to put there.
What in-context memory stores
In-context memory holds everything active during a single inference call:
- The `system_prompt` — instructions, persona, routing rules, and constraints
- Conversation history for the current session
- Retrieved content from external memory stores: chunks from episodic, semantic, and procedural stores
- Tool call outputs (query results, API responses, search results)
- Chain-of-thought reasoning steps and partial scratchpad work
A typical enterprise context window might include: a `user_query` field, a `tool_result` block from a database query, a `retrieved_chunk` with a `relevance_score`, and an injected `certified_metric_definition` pulled from the business glossary.
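As a sketch, assembling such a payload might look like the following. The field names mirror the illustrative ones above, and the relevance-based trimming rule is an assumption, not a prescribed design:

```python
# Sketch of a single-inference context payload. Field names follow the
# illustrative names in the text; the chunk-trimming rule is an assumption.
def build_context(system_prompt, user_query, tool_results, retrieved_chunks,
                  glossary_entry, max_chunks=3):
    """Assemble everything the LLM will see in one inference call."""
    # Keep only the highest-relevance chunks so the window stays in budget.
    top_chunks = sorted(retrieved_chunks,
                        key=lambda c: c["relevance_score"], reverse=True)[:max_chunks]
    return {
        "system_prompt": system_prompt,
        "user_query": user_query,
        "tool_result": tool_results,                    # e.g. database query output
        "retrieved_chunk": top_chunks,                  # episodic/semantic retrievals
        "certified_metric_definition": glossary_entry,  # injected from the glossary
    }

context = build_context(
    system_prompt="Answer using certified metrics only.",
    user_query="What was Q4 net revenue?",
    tool_results=[{"sql": "SELECT ...", "rows": 12}],
    retrieved_chunks=[
        {"text": "Q3 revenue memo", "relevance_score": 0.62},
        {"text": "net_revenue definition discussion", "relevance_score": 0.91},
    ],
    glossary_entry="revenue = net of returns, excluding deferred revenue",
)
```

Everything that influences generation must pass through a dictionary like this one; nothing outside it exists for the model.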
Architectural role and the context window as bottleneck
Permalink to “Architectural role and the context window as bottleneck”In-context memory is both the integration point and the bottleneck. Every other memory type competes for space here. When you add retrieval from episodic, semantic, and procedural stores, plus injected governance context, the window fills fast.
The context window size is not the core problem, however. Knowing what governed context to retrieve is. BrightEdge research finds that the average enterprise AI query is 23 words versus 4 for traditional search — context must be proportionally richer to answer these queries accurately.
Letta’s architecture treats the context window as RAM and external storage as disk. Agents move data between tiers using tool calls, which Letta calls “memory self-management.” This is the right mental model: the window is finite working space, not the primary store.
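That RAM/disk model can be sketched as two tiers with explicit move operations. The class and method names below are illustrative, not Letta's actual API:

```python
class TieredMemory:
    """Context window as RAM, external store as disk; the agent moves data
    between tiers via explicit tool calls (illustrative, not Letta's API)."""
    def __init__(self, window_budget=4):
        self.window = []        # in-context: what the LLM sees this turn
        self.archive = []       # out-of-context: retrieved on demand
        self.window_budget = window_budget

    def remember(self, item):
        """Tool call: add to working memory, evicting oldest items to disk."""
        self.window.append(item)
        while len(self.window) > self.window_budget:
            self.archive.append(self.window.pop(0))

    def recall(self, keyword):
        """Tool call: search the archive and page matches back into the window."""
        hits = [i for i in self.archive if keyword in i]
        for hit in hits:
            self.archive.remove(hit)
            self.remember(hit)
        return hits

mem = TieredMemory(window_budget=2)
for turn in ["greeting", "revenue question", "lineage question"]:
    mem.remember(turn)
# "greeting" has been evicted to the archive, but a recall tool call
# pages it back into working memory on demand.
mem.recall("greeting")
```

The point of the sketch is the eviction loop: the window is a fixed budget, so every item paged in pushes something else out.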
Implementation tools for in-context memory
All LLMs manage in-context memory natively. Letta’s Core Memory blocks (approximately 2,000 characters each) are agent-managed and always in-context. LangChain’s ConversationBufferMemory injects history directly into the prompt.
The enterprise problem is different from the tooling problem. The question is not which framework manages the context window. The question is which governed definition, which lineage chain, and which access policy to retrieve into the window for each query. That requires a layer outside the LLM, upstream of inference.
Episodic memory: what the agent has seen before
Episodic memory stores records of past events and interactions, enabling an agent to reference prior sessions, recall what it queried before, or surface what a user said last week. It mirrors Tulving’s (1972) autobiographical memory concept. In practice, episodic memory is the agent’s conversation history extended beyond the context window into searchable external storage.
What episodic memory stores and how it works
Episodic memory captures the “what, when, and where” of agent interactions:
- Past conversation turns with timestamps and session IDs
- Prior agent actions and tool calls with their outcomes
- Interaction metadata: referenced data assets, query versions, result summaries
- For enterprise agents: records of which version of a dbt model was queried, what result it returned, and whether that result was flagged for quality issues
The February 2025 position paper “Episodic Memory is the Missing Piece for Long-Term LLM Agents” (arXiv:2502.06975) argues that episodic reflection and consolidation — converting past events into compact, reusable representations — is the key mechanism for long-term reasoning. Your agents get smarter over time not by storing more, but by consolidating what they store.
Current episodic memory implementations
Letta’s Recall Memory stores full conversation history with date and text search. It is always out-of-context by default and retrieved on demand. Mem0’s Episodic Memory API stores events with timestamps using hybrid vector plus graph retrieval. LangChain offers ConversationSummaryMemory and ConversationBufferWindowMemory, which keep a running summary and a sliding window of recent turns respectively.
All of these are conversation-centric implementations. They record what users said and what agents responded — which is the right design for assistant-style agents.
The enterprise gap: data-event episodic memory
Enterprise episodic memory requires something the current frameworks do not provide. Your team needs to know not just what a user asked last Tuesday, but when `revenue_mrr` was deprecated, when a metric definition changed, when a data quality incident occurred on the Snowflake pipeline that feeds a report.
This is an operational audit log tied to data assets — closer to a governed event timeline than a conversation record. No current memory framework natively surfaces data-event history as retrievable episodic memory for agents. When your agent queries a metric that was redefined three months ago, it has no way to know — unless that event lives somewhere it can retrieve.
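A data-event episodic record of this kind could be modelled roughly as follows. The schema and field names are assumptions for illustration, not any framework's API:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataEvent:
    """One entry in an asset-centric episodic store: an event about a
    data asset, not a conversation turn (schema is illustrative)."""
    asset: str          # e.g. a metric or table name
    event_type: str     # "deprecated", "redefined", "quality_incident", ...
    occurred_on: date
    detail: str

class DataEventLog:
    def __init__(self):
        self._events = []

    def record(self, event):
        self._events.append(event)

    def history(self, asset):
        """What the agent should check before trusting a metric."""
        return sorted((e for e in self._events if e.asset == asset),
                      key=lambda e: e.occurred_on)

log = DataEventLog()
log.record(DataEvent("revenue_mrr", "redefined", date(2025, 10, 2),
                     "now excludes deferred revenue"))
log.record(DataEvent("revenue_mrr", "deprecated", date(2026, 1, 15),
                     "superseded by a v2 metric"))

# Before answering a revenue_mrr question, the agent retrieves its history:
events = log.history("revenue_mrr")
```

The retrieval key is the asset, not the session: that is the structural difference from conversation-centric episodic memory.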
Semantic memory: what the agent knows
Semantic memory stores factual knowledge — definitions, concepts, rules, and accumulated facts the agent uses for reasoning. It originates in Tulving’s (1972) semantic memory: general world knowledge not tied to specific experiences. In LLM agents, semantic memory exists both in pre-trained model weights and in external storage for organisation-specific facts the model was not trained on.
What semantic memory stores
Semantic memory holds the “what things are” layer of agent knowledge:
- User preference facts extracted from episodic consolidation
- Domain-specific definitions and terminology for your organisation
- Entity properties: what `customer_id` means in your CRM versus what `acct_id` means in Snowflake
- Business rules and policies encoded as retrievable facts
- A stored `certified_revenue_definition` that reads: “revenue = net of returns, excluding deferred revenue, per Finance approval 2026-01-15”
Implementation tools for semantic memory
Vector databases including Pinecone, Weaviate, and pgvector provide semantic search over stored facts. Knowledge graphs add relationship structure. Mem0’s Semantic Memory API stores and retrieves facts through hybrid vector plus graph traversal. Letta’s Archival Memory is an explicit knowledge store with semantic searchability.
The performance numbers are worth noting. The Mem0 research paper (arXiv:2504.19413, April 2025) benchmarks its memory layer at 91% lower p95 latency and 90% token cost savings versus naive context stuffing, with a 26% improvement over OpenAI’s default memory approach. Retrieval architecture matters more than storage volume.
The enterprise gap: governance state as the missing dimension
Standard semantic memory stores facts as unstructured or weakly structured text. This is fine for “what is the capital of France.” It breaks for enterprise use cases.
The difference between “a definition of revenue” and “the certified `net_revenue_q4` definition approved by Finance on 2026-01-15, version 3.2” is not a volume difference. It is a governance difference. Current semantic memory frameworks have no governance state, no certification flag, and no version history on stored facts.
When your agent retrieves a metric definition and uses it to answer a board-level question, you need to know: was that definition current? Was it certified? Who approved it? Has it changed since the last time the agent used it? None of the standard semantic memory frameworks answer these questions. That is not a retrieval problem — it is an architectural one. For more on this distinction, see Context Layer vs Semantic Layer.
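To make the gap concrete, here is a sketch of a semantic-memory entry that carries governance state, plus a retrieval function that enforces it. The schema, store, and names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GovernedFact:
    """A semantic-memory entry carrying governance state (illustrative schema)."""
    name: str
    definition: str
    certified: bool
    version: str
    approved_by: Optional[str]
    approved_on: Optional[str]

store = {
    "net_revenue_q4": GovernedFact(
        name="net_revenue_q4",
        definition="revenue net of returns, excluding deferred revenue",
        certified=True, version="3.2",
        approved_by="Finance", approved_on="2026-01-15"),
    "revenue_scratch": GovernedFact(
        name="revenue_scratch",
        definition="ad hoc revenue calc from a notebook",
        certified=False, version="0.1",
        approved_by=None, approved_on=None),
}

def retrieve_certified(name):
    """Retrieval that enforces certification instead of returning any match."""
    fact = store[name]
    if not fact.certified:
        raise LookupError(f"{name} is not certified; refusing to ground an answer on it")
    return fact
```

A plain vector store would happily return `revenue_scratch` if it scored well on similarity; the governance fields are what let retrieval refuse it.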
Procedural memory: how the agent behaves
Procedural memory stores skills, rules, and behavioural instructions — the “how to act” layer. It originates in Squire’s (1987) non-declarative memory: implicit habits and procedures difficult to articulate verbally. In AI agents, procedural memory typically lives in system prompts, tool definitions, and routing logic. It is the least discussed memory type because it is often built directly into agent architecture.
Procedural memory is the most under-theorised of the four types, which is ironic: it governs everything the agent does.
What procedural memory stores
Procedural memory holds the behavioural rules your agent follows:
- System prompt instructions (agent persona, response constraints, formatting rules)
- Tool call routing logic: which tool to call for which user intent
- Decision trees and escalation rules for edge cases
- Business process rules: “pricing questions must use `certified_pricing_v3`,” “cross-border transactions trigger compliance review,” “win rate must come from `sales_certified`, not `marketing_pipeline`”
- LLM in-weights skills from pre-training or fine-tuning — implicit, not explicitly stored
Current implementations and the CoALA substrate model
System prompts are universal across all agent frameworks. Letta encodes agent personality and instructions as always-in-context blocks. LangChain’s tool definitions and agent executors embed routing logic as code.
CoALA identifies three substrates for procedural memory: embedded in LLM weights (training), written in agent code, or stored as explicit instruction sets. This is an important distinction for enterprise architects. In-weights procedural knowledge cannot be updated without retraining. Code-embedded routing cannot be updated without a deployment. Only explicit instruction sets — system prompts and managed rule libraries — can be updated without touching the model or the code.
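The practical difference can be sketched by contrasting a code-embedded fallback with a rule read from a central, versioned library at runtime. The rule store and rule names here are hypothetical:

```python
# Procedural memory as an explicit, versioned instruction set: routing rules
# live in a central store the agent reads at runtime, so a policy change is
# a data update, not a redeploy. Rule names and structure are hypothetical.
RULE_LIBRARY = {
    "routing/pricing": {"version": 3, "use_dataset": "certified_pricing_v3"},
    "routing/win_rate": {"version": 1, "use_dataset": "sales_certified"},
}

def route(intent):
    """Resolve which dataset an intent may query."""
    rule = RULE_LIBRARY.get(f"routing/{intent}")
    if rule:
        return rule["use_dataset"]   # explicit instruction set: updatable live
    return "default_warehouse"       # code-embedded fallback: needs a deployment to change

assert route("win_rate") == "sales_certified"

# Governance updates the rule centrally; every agent reading the library
# picks up the change without a prompt edit or redeploy.
RULE_LIBRARY["routing/win_rate"] = {"version": 2, "use_dataset": "sales_certified_v2"}
```

The fallback branch is the code-embedded substrate; the library lookup is the explicit one. Only the second can be changed while agents are running.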
The enterprise gap: per-agent procedural memory cannot scale
Your current procedural memory is almost certainly hard-coded per agent or stored as ad hoc prompts. This creates a governance problem that compounds over time. When a data access policy changes, every agent consuming that policy must be updated independently. When a routing rule changes, the change must propagate across every system prompt that encodes it.
Enterprise governance requires that procedures be versioned, auditable, and centrally managed. An `access_policy` or `data_contract` is a procedural memory object — it tells the agent how to behave. But no current framework manages these as governed procedural memory. For a practical inventory of what breaks when it does not, see Common Context Problems Data Teams Face Building Agents.
The missing type: organisational context memory
Organisational context memory is the governed, persistent, cross-system memory of an enterprise’s data estate: what assets exist, what they mean, who owns them, what they are certified for, how they relate across systems, and under what conditions they may be accessed. It is not conversation history, world knowledge, a skill, or the current context window. It is the institutional memory of the data estate.
The current AI overview for “types of AI agent memory” presents four types as exhaustive. That framing is accurate for assistant-style agents. It is not accurate for enterprise data agents.
Why the standard taxonomy misses organisational context
CoALA and all major frameworks were designed for agents that interact with users. The memory types they define address: what the user said before (episodic), general facts about the world (semantic), how to respond (procedural), and what is happening now (in-context).
Enterprise data agents face a fundamentally different requirement. Their memory substrate is the data estate itself — with governance, lineage, quality signals, ownership structures, and certified definitions that no user session contains.
One valid objection here is worth addressing: couldn’t “organisational context memory” just be enriched semantic memory? You could extend your vector database to store certified definitions instead of plain ones. You could add metadata fields for governance state.
The architectural requirements disqualify this framing. Governance state is not a stored fact — it is a live property maintained by approval workflows, with version history and machine-enforceable constraints. Data lineage is a structural graph property, not a semantic fact about an asset. Cross-system entity identity is a dynamic mapping layer, not a stored claim. Access policy is a runtime enforcement mechanism, not something you retrieve and then decide whether to follow. You cannot meet these requirements by extending a semantic memory tier. They require a dedicated infrastructure layer — which is what separates a memory layer from a context layer.
What organisational context memory contains
Five components map directly to the gaps in each standard type:
| Gap in standard type | What organisational context memory adds |
|---|---|
| Semantic memory has no governance state | Certified definitions with version history, approval timestamps, ownership records |
| Episodic memory is conversation-centric | Data-event history: deprecations, quality incidents, lineage changes, metric_definition re-definitions |
| Procedural memory is per-agent and unversioned | Governance policies, data_contracts, access controls — centrally managed and machine-enforceable |
| In-context memory retrieves whatever is available | Runtime injection of governed context: user identity, applicable policies, data freshness, lineage chain |
| No standard type covers cross-system identity | Entity resolution across CRM, billing, support: same real-world customer, different customer_id per system |
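The last row, cross-system identity, is at heart a mapping layer. A toy resolver, with invented system names and local IDs, shows the shape:

```python
# Toy entity-resolution map: one canonical entity per real-world customer,
# with per-system local IDs. Systems and IDs are invented for illustration.
IDENTITY_MAP = {
    "entity:acme-corp": {
        "crm": "customer_id=C-1042",
        "billing": "acct_id=88317",
        "support": "contact_id=ZD-55",
    },
}

def resolve(system, local_id):
    """Find the canonical entity behind a system-local identifier."""
    for entity, ids in IDENTITY_MAP.items():
        if ids.get(system) == local_id:
            return entity
    return None

# The same real-world customer resolves identically from any system:
assert resolve("crm", "customer_id=C-1042") == resolve("billing", "acct_id=88317")
```

In production this mapping is maintained continuously as systems change, which is why it behaves like a live layer rather than a stored semantic fact.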
Evidence the gap is real
The practitioners building enterprise agents are not finding a conversation recall problem. They are finding a definition and governance problem.
Joe DosSantos, VP Enterprise Data & Analytics at Workday, described it directly: “We built a revenue analysis agent and it couldn’t answer one question. We started to realize we were missing this translation layer.” The missing layer was organisational context, not conversation history. No upgrade to episodic or semantic memory would have fixed it.
In Snowflake’s internal research, adding an ontology layer (a form of organisational context memory) to their agents produced a 20% improvement in agent answer accuracy and a 39% reduction in tool calls. The ontology is entity identity mapping across systems — a component of organisational context memory that does not exist in any standard memory type.
The March 2026 paper “Governed Memory: A Production Architecture for Multi-Agent Workflows” (arXiv:2603.17787) identifies five structural failures when enterprise agents have no shared governed memory: memory silos, governance fragmentation, unstructured memories unusable by downstream systems, redundant context delivery, and silent quality degradation. This is the failure mode, documented.
Gartner’s 2026 D&A predictions frame context as “the new critical infrastructure” and project 60% of AI projects will be abandoned due to context and data readiness gaps — not model capability gaps. The problem is the infrastructure layer, not the model.
What Atlan’s context layer provides as organisational context memory
Atlan’s context layer is built specifically to close this gap. Its components map to each dimension of organisational context memory:
- Enterprise Data Graph: metadata from 100+ systems unified into a queryable graph — the structural backbone
- Governed business glossary: certified definitions with version history and ownership; `certified_revenue_definition` with approval timestamp
- Column-level lineage: end-to-end provenance across Snowflake, Databricks, dbt, BI tools — from answer back to source table
- Active ontology: cross-system entity identity resolution — `customer_id` in CRM equals `acct_id` in Snowflake equals `contact_id` in Salesforce
- Active metadata lakehouse (Iceberg-native): event streams, decision traces, continuous ingestion — decision memory that persists across agent sessions
- MCP server and Context Studio: surfaces governed context to any AI tool at inference time; bootstraps from existing dashboards and query history
Atlan’s position on this is precise: “We believe the right answer is not to embed context into individual agents — that fragments knowledge and creates inconsistency. Instead, we’re building a universal context layer: a shared, living source of truth that any AI agent can draw from.”
This is the architectural difference. A memory layer is embedded per agent. A context layer is shared across all agents, governed by the organisation, and maintained as an enterprise-wide source of truth. For more on how active metadata underpins this, see the active metadata management page. The agent context layer guide walks through the full architecture.
How the five types work together in production agents
In a production agent, the five memory types form a layered retrieval pipeline: procedural memory defines the routing rules, semantic and episodic memory surface relevant facts and history, organisational context memory injects governed enterprise definitions and policies, and in-context memory assembles the final context window for each inference call. Failures in any layer propagate as hallucinated or ungoverned responses.
The retrieval flow: a concrete example
Consider the query: “What was Q4 win rate by segment?”
1. Procedural memory determines routing: `win_rate` queries must use the `sales_certified` dataset per `routing_rule_017`
2. Semantic memory retrieves the certified `win_rate` definition
3. Episodic memory checks prior sessions: was this query run before, and did it surface a data quality incident?
4. Organisational context memory injects: column-level lineage for `win_rate`, certification status, and applicable access policies for this user
5. In-context memory assembles all retrieved content into the context window
6. The LLM generates a response grounded in governed context
7. The response includes provenance: sourced from `sales_certified.win_rate_v3`, certified 2026-01-10
Without organisational context memory, step 4 fails. The agent uses whichever `win_rate` column it finds via semantic search — no certification check, no lineage, no policy enforcement. With organisational context memory, the agent uses the governed definition, applies the correct access policy, and can trace the answer back to source tables.
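The retrieval flow above can be sketched end to end. Every store here is a stub dictionary, and the names follow the illustrative ones in the example:

```python
# End-to-end sketch of the five-type pipeline for one query. Each store is a
# stub; the names (routing_rule_017, sales_certified, ...) are illustrative.
PROCEDURAL = {"win_rate": {"rule": "routing_rule_017", "dataset": "sales_certified"}}
SEMANTIC = {"win_rate": "wins / (wins + losses), certified definition"}
EPISODIC = {"win_rate": "no prior quality incidents for this query"}
ORG_CONTEXT = {
    "win_rate": {"lineage": ["raw.crm_opps", "sales_certified.win_rate_v3"],
                 "certified_on": "2026-01-10", "policy": "sales-readers only"},
}

def answer(metric, user_groups):
    routing = PROCEDURAL[metric]              # procedural: how to route
    definition = SEMANTIC[metric]             # semantic: what it means
    history = EPISODIC[metric]                # episodic: what happened before
    ctx = ORG_CONTEXT[metric]                 # organisational context: governed state
    if "sales-readers" not in user_groups:    # policy enforced at runtime
        raise PermissionError(ctx["policy"])
    window = {                                # in-context: assemble the window
        "routing": routing, "definition": definition,
        "history": history, "lineage": ctx["lineage"],
    }
    # Generation would happen here; the response carries provenance back to source.
    return {"context": window,
            "provenance": f"{ctx['lineage'][-1]}, certified {ctx['certified_on']}"}

result = answer("win_rate", user_groups={"sales-readers"})
```

Deleting the `ORG_CONTEXT` lookup reproduces the failure described above: the answer still generates, but with no policy check and no provenance.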
This is the difference between a “memory layer” and a context layer. Memory layers are retrieval architectures. Context layers are governance architectures. Context-aware AI agents are built on the second, not the first.
Memory layer vs context layer: a direct comparison
| Capability | Memory Layer (4 types) | Context Layer (5 types + governance) |
|---|---|---|
| Conversation recall | Yes | Yes |
| Factual knowledge retrieval | Yes | Yes (plus certification state) |
| Skill and rule execution | Yes | Yes (plus versioned, auditable policies) |
| Enterprise definition governance | No | Yes — governed business glossary |
| Data lineage and provenance | No | Yes — column-level lineage |
| Cross-system entity identity | No | Yes — active ontology |
| Access policy enforcement at runtime | No | Yes — governance policies |
| Multi-agent shared memory governance | No | Yes — enterprise-wide context layer |
For the full discipline of building and managing this retrieval pipeline, see What Is Context Engineering?.
Enterprise implications: which types matter most for data teams
For data teams building agents that query live data estates, semantic memory and organisational context memory are the most consequential types. Semantic memory stores the definitions your agents reason from. Organisational context memory governs those definitions, tracks their lineage, and enforces who can access what. Without these two types working together, even technically sound agents produce ungoverned, unauditable answers.
Priority matrix for enterprise data agents
Must have — these determine whether your agent produces trustworthy output:
- Organisational context memory: governed definitions, lineage, and policies for the entire data estate
- Semantic memory: business glossary facts, metric definitions — certified, not just retrievable
- In-context memory: context engineering discipline to inject the right governed context per query
Should have — these improve reliability and auditability over time:
- Episodic memory: decision audit trail — which agent made which call, against which data version, when
- Procedural memory: routing rules encoded centrally, not per agent, so policy updates propagate automatically
Use case dependent — evaluate against your specific deployment:
- Conversation-centric episodic memory: critical for user-facing copilots, less important for batch analytics agents
- General-world semantic memory: largely covered by LLM pre-training; enterprise-specific definitions are the real gap
Governance as the architectural difference
The numbers on this are unambiguous. Gartner (2025) projects that by 2030, 50% of enterprise AI agent deployment failures will be due to insufficient runtime enforcement by AI governance platforms — not capability gaps.
Gartner’s 2026 predictions add a sharper warning: 60% of agentic analytics projects relying solely on MCP will fail by 2028 without semantic foundations. The MCP protocol solves the connectivity problem. It does not solve the governance problem.
Deloitte’s 2026 Enterprise AI survey found that despite $30-40B spent on enterprise generative AI, 95% of organisations saw no measurable ROI. The structural context gap is the leading diagnosis — not model quality, not tooling maturity.
The memory versus context layer distinction matters at the budget level, not just the architecture level. Investing in four-type memory frameworks is investing in retrieval. Investing in a context layer is investing in governance infrastructure. For the commercial dimension of this, see Closing the Context Gap and Gartner on Context Graphs.
Wrapping up
The four-type taxonomy from CoALA (Princeton, 2023) is legitimate and well-grounded for assistant-style agents. Teach it to every team member building agent systems — it is the right starting point.
The taxonomy was not designed for enterprise data agents operating against governed data estates. Each type has a specific gap at the enterprise layer: semantic memory lacks governance state, episodic memory is conversation-centric rather than data-event-centric, procedural memory is per-agent rather than centrally governed, and in-context memory requires knowing what governed context to retrieve.
The fifth type — organisational context memory — closes these gaps. It brings governed definitions, lineage, entity identity, and policy enforcement into a shared enterprise-wide layer that any agent can draw from.
The distinction is architectural, not philosophical. Memory frameworks are retrieval architectures. Context layers are governance architectures. Building the former when you need the latter is the leading cause of enterprise AI agent failures — more than model quality, more than tooling gaps.
As enterprise AI agent deployments scale, the governance of shared memory becomes the critical infrastructure layer. The question is not which memory framework your agent uses. The question is whether your organisation has the governed context layer that makes any memory framework trustworthy.
External citations: CoALA (Princeton, 2023) | Memory in the Age of AI Agents (arXiv:2512.13564) | Episodic Memory is the Missing Piece (arXiv:2502.06975) | Mem0 production paper (arXiv:2504.19413) | Governed Memory (arXiv:2603.17787) | Snowflake Agent Context Layer blog | Gartner D&A 2026 predictions
FAQs about AI agent memory
1. What are the four types of AI agent memory?
The four types are in-context (working) memory — the active context window the model reasons over; episodic memory — records of past events and interactions; semantic memory — factual knowledge, definitions, and accumulated world knowledge; and procedural memory — skills, rules, and behavioural instructions. The CoALA framework (Princeton, arXiv:2309.02427) formalised this taxonomy from cognitive science for language model agents in 2023.
2. What is the difference between semantic and episodic memory in AI agents?
Semantic memory stores general facts and definitions — what things are, independent of when or where they were learned. Episodic memory stores specific past events tied to time — what happened, when, in which session. Semantic memory is queried for “what does revenue mean”; episodic memory is queried for “did this agent run this revenue query before and what did it return?” Both are long-term external stores retrieved into the context window on demand.
3. How does procedural memory work in AI agents?
Procedural memory encodes how the agent should behave — routing rules, response instructions, tool-call logic. It lives primarily in system prompts, agent code, and LLM pre-training weights rather than in a retrievable database. Unlike episodic and semantic memory (which are explicitly retrieved), procedural memory is typically implicit: the agent follows its instructions without consciously “looking them up.” Enterprise agents need procedural memory to be versioned and centrally managed, not hard-coded per agent.
4. What is in-context memory in AI and how is it different from external memory?
In-context memory is everything currently inside the LLM’s context window — system prompt, conversation history, retrieved chunks, and tool outputs — processed directly during inference. External memory (episodic, semantic, and procedural stores) lives outside the model and must be retrieved and injected into the context window to influence generation. The context window is the only memory the model directly reasons over; all other types are retrieval substrates.
5. How do AI agents remember things between sessions?
Agents persist cross-session information by writing to external memory stores before the session ends and retrieving relevant entries at the start of a new session. Episodic memory stores conversation history with timestamps; semantic memory stores extracted facts; procedural memory stores updated instructions. Frameworks like Letta, Mem0, and LangChain provide APIs for this. Without explicit cross-session storage, agents have no memory between sessions — the context window resets at each new conversation.
6. What tools are used to implement AI agent memory?
Common tools: Letta (three-tier Core/Recall/Archival memory with self-management), Mem0 (universal memory layer with episodic, semantic, and procedural APIs; 91% lower latency versus naive context stuffing), LangChain/LangMem (hot-path and background memory extraction modes). For enterprise organisational context memory, data catalog platforms with governed metadata graphs — such as Atlan’s context layer — are required. No general-purpose memory framework currently provides governed enterprise definitions, data lineage, or policy enforcement natively.
7. Why do enterprise AI agents fail even with good memory systems?
Enterprise agents fail because the four standard memory types were designed for chatbot-style agents, not data agents querying live data estates. A well-implemented four-type memory system cannot provide governed metric definitions, column-level data lineage, cross-system entity identity resolution, or runtime access policy enforcement — the components Gartner calls “context and data readiness.” Gartner (2026) predicts 60% of AI projects will be abandoned through 2026 due to these gaps, not model quality failures.
What is organisational context memory and how is it different from standard agent memory?
Organisational context memory is the governed, persistent memory of an enterprise’s data estate: certified metric definitions, data lineage graphs, cross-system entity identity maps, and machine-enforceable access policies. It differs from standard memory types because it requires governance state, version history, and runtime enforcement — properties that cannot be modelled as flat semantic facts or conversation records. It is the fifth type beyond the CoALA taxonomy, required for enterprise data agents operating against live governed data.
What is a context layer and how is it different from a memory layer?
A memory layer (the four standard types) is a retrieval architecture — it stores and surfaces information when queried. A context layer is a governance architecture — it adds certification state, lineage, access policy enforcement, and cross-system identity resolution to what the agent retrieves. Memory layers are appropriate for assistant-style agents. Enterprise data agents need a context layer: memory plus governance plus provenance, shared across all agents in the organisation rather than embedded per agent.