A note before we begin: The term “cold start” has an older, well-documented life in machine learning — specifically in recommender systems, where it describes the difficulty of predicting recommendations for a new user or new item with no prior interaction history. That problem is real and has its own literature. This article is not about that. This article is about a different and more urgent version of the cold-start problem: the one that causes enterprise AI agents to fail in production within days or weeks of deployment. The two problems share a name but require entirely different solutions. If you arrived here looking for collaborative filtering cold start, this page will not help you. If you arrived here because your agents are failing in production despite a capable model, read on.
Quick facts
| What It Is | A two-layer problem: session cold start (no memory of prior conversations) and organizational cold start (no knowledge of the organization’s data estate) |
|---|---|
| Who It Affects | Any team deploying AI agents against internal enterprise data |
| Session Cold Start Fix | Memory layer tools — Mem0, Zep, LangMem, vector stores |
| Organisational Cold Start Fix | Context layer — governed metadata, business glossary, lineage, ontology |
| Evidence of Scale | 95% of enterprise AI pilots deliver zero measurable ROI (MIT, 2025); 40%+ of agentic AI projects to be canceled by 2027 (Gartner) |
| Time Lost Without a Context Layer | Teams spend 2–3 weeks manually encoding business rules per AI initiative before agents can function |
| Snowflake Data Point | +20% answer accuracy, −39% tool calls from adding an organizational ontology (Snowflake, March 2026) |
The session cold start: what it is and how memory layers solve it
The session cold start occurs when an AI agent begins each new conversation with no recollection of prior sessions. It cannot remember what data sources were reviewed, what conclusions were reached, or what user preferences were established. Memory layer tools — Mem0, Zep, LangMem, and vector store frameworks — address this by persisting conversation history across sessions.
Each session starts with a blank context window. The agent has no access to the outcomes of prior conversations unless a memory layer explicitly injects that history at session start. In practice, this looks like: “The agent gave me the same wrong answer it gave me last week.” Or: “I have to re-explain my role every time I open a new chat.” This is a persistence problem, not a knowledge problem. The agent is not ignorant of your organization — it simply cannot access what it previously learned.
Memory layer tools address this through a persistent store — a vector database, key-value store, or structured record — of session summaries, user preferences, and prior decisions. At session start, relevant memories are retrieved and injected into the context window. The agent now has continuity. Mem0’s user_id-scoped memory and Zep’s episodic memory framework are two mature implementations of this approach. The memory layer ecosystem is well-developed; session cold start is largely a solved problem for teams that invest in it.
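The retrieve-and-inject pattern can be sketched in a few lines. This is a minimal illustration, not Mem0's or Zep's actual API; the in-memory store, the `user_id` scoping, and the injection format are all assumptions for demonstration.

```python
from collections import defaultdict


class SessionMemory:
    """Minimal sketch of a user-scoped memory layer (illustrative only)."""

    def __init__(self):
        # user_id -> list of remembered facts (summaries, preferences, corrections)
        self._store = defaultdict(list)

    def remember(self, user_id: str, fact: str) -> None:
        """Persist a fact so it survives beyond the current session."""
        self._store[user_id].append(fact)

    def recall(self, user_id: str, limit: int = 5) -> list:
        """Retrieve the most recent memories for injection at session start."""
        return self._store[user_id][-limit:]


def start_session(memory: SessionMemory, user_id: str, system_prompt: str) -> str:
    """Inject prior memories into the context window before the first turn."""
    memories = memory.recall(user_id)
    if not memories:
        return system_prompt  # true cold start: nothing to inject
    memory_block = "\n".join(f"- {m}" for m in memories)
    return f"{system_prompt}\n\nKnown from prior sessions:\n{memory_block}"


# A correction persisted in week 1 survives into week 2's session:
mem = SessionMemory()
mem.remember("analyst_42", "Use pipeline_stage_c, not pipeline_stage_old (deprecated).")
prompt = start_session(mem, "analyst_42", "You are a data analyst agent.")
```

Production memory layers replace the in-memory dict with a vector database or key-value store and rank memories by relevance to the incoming query, but the shape of the mechanism is the same: persist, retrieve, inject.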
Most current SERP content about the “AI agent cold start problem” focuses here. Memory platform vendors market directly against this framing: “agents that remember between conversations.” This is accurate and useful. But it conflates session cold start with a harder, second problem that memory tooling was never designed to solve. Understanding what agentic AI is — including the three-stage failure sequence of cold start, testing hell, and scaling wall — shows why the failure patterns enterprises hit without context infrastructure all trace back to this conflation: teams solve the wrong layer and wonder why their agents still fail.
What Session Cold Start Looks Like in Practice
A data analyst agent built on LangChain with no memory layer receives the same question every Monday: “What are this week’s pipeline metrics?” Each Monday, the agent has no memory that it ran this query last week, produced a result, and received a correction: “Use pipeline_stage_c not pipeline_stage_old — the old field was deprecated in December.” The agent makes the same mistake again next week. A session memory layer fixes this by persisting the correction.
What a session memory layer resolves:
- Repeated explanations of user role and preferences
- Re-stating the same corrections session after session
- Loss of intermediate reasoning from prior sessions
- User-level context — timezone, preferred metric formats, decision history
This is the category of cold start the current market has largely solved. The second category has not been solved. In fact, it has barely been named.
The organizational cold start: the real enterprise problem
The organizational cold start is the condition in which an AI agent, even one with perfect session memory, has never been introduced to the organization’s data estate. It does not know the business’s definition of revenue, which tables are canonical, how data lineage flows, or what governance rules apply — information no session memory tool supplies.
The organizational cold start is not about forgetting. It is about never having known. When an enterprise deploys a new agent, that agent arrives with general-purpose world knowledge — and zero knowledge of this specific organization. It does not know that net_revenue_recognised is the canonical revenue field and gross_rev_2019 is deprecated. It does not know that “sales” means net sales in the finance team’s vocabulary and gross sales in the ops team’s vocabulary. It does not know that your fiscal year ends January 31st, not December 31st. None of this information exists in any model’s training data. None of it can be injected by a session memory tool that has never seen it either.
The clearest real-world illustration comes from Joe DosSantos, VP Enterprise Data and Analytics at Workday. His team built a revenue analysis agent. The model was capable. The infrastructure was ready. The agent could not answer a single question. His diagnosis: “We started to realize we were missing this translation layer. We had no way to interpret human language against the structure of the data.” The model was not broken. The organizational context was absent. This is organizational cold start in its clearest form — the agent was deployed with no introduction to the organization it was meant to serve. A Mem0 memory layer would not have helped. There were no sessions to remember.
Memory tools store and retrieve what happens in sessions. They cannot supply what was never present in a session in the first place. If the business glossary, lineage map, and metric definitions were never fed to the agent — not in any session, not in any training run — no memory retrieval will surface them. Some teams partially work around this by manually loading business definitions into organizational memory tiers in tools like Mem0 — but this requires the same manual curation effort as prompt engineering, still lacks governed lineage and deprecation signals, and breaks the principle that organizational context should be built once and inherited by every agent automatically. Understanding why context engineering differs from prompt engineering makes the scope boundary clear: prompt engineering shapes individual queries, while context engineering shapes the knowledge infrastructure agents inherit. Understanding what an agent context layer is shows the architectural solution: context layer infrastructure supplies the organizational knowledge that precedes any session. Memory tools extend that knowledge forward in time. Neither can substitute for the other.
What Organisational Cold Start Looks Like in Practice
A retail data leader describes the failure mode directly: “The failure today is the learning curve — did I state something that’s not explicitly in the data? When I say sales, is it net sales? Gross sales? There are different qualifications.” The agent is not malfunctioning. It is cold to the organization’s definitional vocabulary. Every query requires manual disambiguation that an agent with access to a governed business glossary would not need.
The specific problems cold start creates for data teams compound quickly:
- Teams spend 2–3 weeks manually writing field definitions and business rules before each new AI initiative can function
- This work is often stale before the pilot reaches production
- Reproducing it for a second use case requires the same manual effort from scratch
- One enterprise team ran over 1,000 manual test cases over five months to validate a single agent — because there was no governed source of truth to compare outputs against
Session Cold Start vs. Organisational Cold Start
The table below shows why these are architecturally distinct problems, not two versions of the same problem.
| Dimension | Session Cold Start | Organisational Cold Start |
|---|---|---|
| What is missing | Memory of prior conversations | Knowledge of the organization |
| Scope | User-level or session-level | Enterprise-level |
| Duration of problem | Resolves once memory layer is in place | Persists until context layer is built |
| Tools that address it | Mem0, Zep, LangMem, vector stores | Context layer, data catalog, semantic layer, ontology |
| Can be inherited across agents? | No — built per session | Yes — built once, consumed by every agent |
| Atlan’s position | Not Atlan’s play | Core Atlan infrastructure |
Why the organizational cold start is the harder problem
The organizational cold start is harder because it cannot be resolved by tooling alone — it requires building a governed, machine-readable representation of the organization’s entire data estate. The Snowflake ontology experiment (+20% accuracy, −39% tool calls) and enterprise onboarding timelines of 8–24 weeks demonstrate that context infrastructure, not model quality, is the binding constraint on agent performance.
The Snowflake Ontology Experiment
The strongest technical proof of organizational cold start comes from Snowflake’s internal experiment, published by Josh Klahr and Rajhans Samdani in March 2026. Their team added a plain-text “data ontology” — join keys, table grains, cardinality and fanout hints — to an agent that was already receiving Snowflake semantic views. The agent already had structured, curated data access. It was not a poorly configured agent.
The result of adding the organizational context layer: final answer accuracy improved by +20% and average tool calls decreased by ~39%. End-to-end latency improved as well. The model could not infer these relationships from the data alone. Understanding how ontologies supply agent context explains why: an ontology encodes the relationships and meanings that exist in human understanding of the data — exactly the knowledge that an organizationally cold agent lacks. The cold start ended only when the context layer was added, not when data access was improved (Snowflake blog, March 2026).
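The shape of the intervention can be sketched as follows: a plain-text ontology section rendered from structured hints and prepended to the agent's instructions. The table names, hint schema, and rendering format below are invented for illustration; Snowflake's actual ontology format is not reproduced here.

```python
# Hypothetical join-key / grain / fanout hints for one table.
# Everything here (names, schema) is illustrative, not Snowflake's format.
ONTOLOGY_HINTS = {
    "orders": {
        "grain": "one row per order line",
        "joins": {"customers": "orders.customer_id = customers.id"},
        "fanout": "joining orders to order_items multiplies rows",
    },
}


def render_ontology(hints: dict) -> str:
    """Render structured ontology hints as a plain-text context block."""
    lines = ["## Data ontology"]
    for table, meta in hints.items():
        lines.append(f"Table `{table}` (grain: {meta['grain']})")
        for other, key in meta["joins"].items():
            lines.append(f"  join to `{other}` on {key}")
        lines.append(f"  fanout warning: {meta['fanout']}")
    return "\n".join(lines)


# The ontology block precedes the task instructions in the agent's context:
agent_instructions = render_ontology(ONTOLOGY_HINTS) + "\n\nAnswer using SQL."
```

The point of the experiment is that this kind of explicit, human-authored relationship knowledge reduced both errors and exploratory tool calls, because the agent no longer had to discover grains and join paths by trial and error.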
The Enterprise Failure Data
The Snowflake result is not an isolated finding. MIT’s “GenAI Divide” report found that 95% of enterprise AI pilots delivered zero measurable P&L impact — with root causes identified as “brittle workflows, lack of contextual learning, and misalignment with day-to-day operations, not model quality” (MIT, August 2025). The phrase “lack of contextual learning” is a description of organizational cold start. The agents were not learning the organization.
Gartner predicts that over 40% of agentic AI projects will be canceled by end of 2027, citing escalating costs and unclear business value (Gartner, June 2025). Bain closes the loop: 80% of AI use cases met technical expectations, yet only 23% of companies tied them to measurable revenue or cost reduction (Bain, November 2025). The gap between “working in demos” and “delivering value in production” is architectural, not a matter of model quality — and that architectural gap is cold start.
Gartner also found that 63% of organizations either lack or are unsure whether they have the right data management practices for AI (Gartner, February 2025). Without AI-ready data infrastructure, no agent deployment escapes organizational cold start. The data is not in a form the agent can use.
Why Organisational Cold Start Compounds Over Time
Session cold start is a linear problem — solve it once per session, with a memory layer. Organisational cold start compounds. Every agent deployed without a context layer requires the same manual bootstrapping effort. Every metric redefined by a human without updating the context layer creates new divergence. Every new data source added without lineage documentation extends the cold start for every future agent.
The only escape from compounding is building context infrastructure that every agent inherits automatically. Until that infrastructure exists, your team pays cold-start tax on every single deployment.
The Onboarding Timeline Evidence
The cold-start tax shows up clearly in onboarding timelines:
- Enterprise agentic AI onboarding spans 8–24 weeks for initial proof-of-concept phases
- Knowledge base development alone consumes 3–4 weeks before agents can function at all
- Complex environments with legacy systems or regulatory requirements add 1–4 weeks further
- One Atlan customer in insurance reduced a projected 12-month build to approximately 1 month using Context Studio — the time saved is almost entirely cold-start resolution work
The 8–24 week onboarding timeline is not an engineering problem. It is not a skills gap. It is cold-start tax — the cost your team pays every time an agent starts without organizational context pre-built. Until that context is treated as infrastructure rather than a per-project deliverable, the tax repeats with every deployment.
Why the Current SERP Consensus Gets This Wrong
The current search results for “AI agent cold start problem” fall into two categories. The first treats cold start as a statistical learning problem — the recommender system cold start, which is a different problem entirely. The second treats it as a session persistence problem, correctly solved by memory layers.
Neither framing identifies the enterprise blocker: contextual ignorance of the organization, not memory failure. This is not a minor distinction. The architectural implication is significant. Organisations that invest only in memory layers will still hit the organizational cold start wall every time they deploy a new agent against their data. The LangChain State of Agent Engineering survey found that for enterprises with 10,000+ employees, hallucinations and output consistency — not tool use or orchestration — are the dominant engineering concerns. These are symptoms of organizational cold start: agents that do not know what the organization knows.
How enterprises are solving the organizational cold start
Enterprises solving organizational cold start build a context layer — a governed, machine-readable representation of the data estate — before deploying agents. The core components are: a business glossary with owned metric definitions, data lineage from source to consumption, semantic tags, and pre-seeded agent onboarding patterns that give every new agent immediate organizational awareness.
Component 1: Pre-Seeded Metadata and Business Glossary
The foundational element is a data estate represented in a form agents can consume before any session begins: field-level descriptions, canonical versus deprecated status flags, business glossary entries for ambiguous terms (revenue, active user, churn), and ownership metadata. This is what the Workday team did not have. Without it, every new agent is organizationally cold.
The key capabilities this requires:
- Governed definitions: `net_revenue_recognised` mapped to its business definition, owner, and calculation logic
- Deprecation signals: Fields marked deprecated are excluded from agent reasoning automatically, not discovered through failure
- Vocabulary disambiguation: Agent understands that `sales` in finance context means net sales; in ops context means gross sales
This is not a documentation project. It is infrastructure. Built once, it serves every agent your team ever deploys.
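A governed definition might look like the sketch below in machine-readable form. The schema, field names, and owner labels are invented for illustration; no vendor's actual catalog format is implied.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class GlossaryEntry:
    """One governed field definition (illustrative schema)."""
    field: str
    definition: str
    owner: str
    deprecated: bool = False


GLOSSARY = [
    GlossaryEntry(
        field="net_revenue_recognised",
        definition="Recognised revenue net of refunds; the canonical revenue field.",
        owner="finance-data",
    ),
    GlossaryEntry(
        field="gross_rev_2019",
        definition="Legacy gross revenue field, superseded.",
        owner="finance-data",
        deprecated=True,
    ),
]


def fields_for_agent(glossary: list) -> list:
    """Deprecated fields are filtered out before the agent ever reasons over them,
    rather than being discovered through failed queries."""
    return [entry for entry in glossary if not entry.deprecated]
```

The deprecation flag is the key design point: exclusion happens at the infrastructure layer, once, instead of in every agent's prompt.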
Component 2: Data Lineage as Agent Orientation
Lineage is not just a governance tool — it is an agent navigation map. When an agent knows that revenue_summary is computed from orders_clean which flows from orders_raw, it can trace anomalies to their source without manual intervention. Without lineage, the agent is cold to the data’s provenance and cannot reason about trust, recency, or reliability.
In dbt terms: when a dbt model is modified, lineage awareness means the agent knows which downstream metrics are affected. Cold to lineage, the agent may confidently surface a stale metric with no signal that it has changed. This is how agents pass staging tests and then fail in production — staging environments do not surface the full lineage picture.
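A lineage map can be treated as a directed graph, and "which downstream metrics are affected?" becomes a reachability query. A minimal sketch, with table names borrowed from the example above and the edge structure invented for illustration:

```python
# Edges point from upstream source to downstream consumer (illustrative names).
LINEAGE = {
    "orders_raw": ["orders_clean"],
    "orders_clean": ["revenue_summary", "pipeline_metrics"],
    "revenue_summary": ["exec_dashboard"],
}


def downstream_of(node: str, lineage: dict) -> set:
    """Everything reachable downstream of `node`: what a change may invalidate."""
    affected = set()
    stack = [node]
    while stack:
        for child in lineage.get(stack.pop(), []):
            if child not in affected:
                affected.add(child)
                stack.append(child)
    return affected


# If orders_clean changes, the agent knows revenue_summary carries staleness risk:
impact = downstream_of("orders_clean", LINEAGE)
```

With this map in its context, an agent can flag that a metric sits downstream of a recently modified model instead of confidently serving a stale value.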
Component 3: Agent Onboarding Patterns — The “Introduction” Model
The architectural pattern that resolves cold start at scale is a shared context layer that every agent inherits, rather than each agent team manually encoding context from scratch. New agents are “onboarded” to the organization the same way a new employee is: through a structured introduction to the data estate, its rules, its definitions, and its history. Context engineering is the discipline that makes this systematic rather than ad hoc.
What this looks like in practice:
- Context layer pre-loads to every agent’s working context at initialisation
- Business glossary entries, lineage maps, and governance rules are available without prompt engineering
- Agent corrections and human refinements accumulate in the context layer — it improves with every interaction
- One insurance customer estimated a 12x time compression on agent deployment once the context layer was established
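The inheritance pattern in the list above — build the context once, have every agent load it at initialisation — can be sketched as follows. The bundle contents and function names are assumptions for demonstration, not a specific product's API; in practice the bundle would be fetched from a catalog or context-layer service rather than hard-coded.

```python
def load_context_bundle() -> str:
    """One shared, governed context bundle. Hard-coded here for illustration;
    a real implementation would fetch this from a catalog/context-layer service."""
    return (
        "## Glossary\n"
        "revenue = net_revenue_recognised (owner: finance-data)\n"
        "## Governance\n"
        "Fiscal year ends January 31.\n"
        "## Lineage\n"
        "orders_raw -> orders_clean -> revenue_summary\n"
    )


def init_agent(role_prompt: str) -> str:
    """Every agent, regardless of team, inherits the same context at startup,
    so organizational knowledge is built once and consumed everywhere."""
    return load_context_bundle() + "\n" + role_prompt


finance_agent = init_agent("You answer finance questions.")
ops_agent = init_agent("You answer operations questions.")
```

The design point is that neither agent team wrote any of the organizational context; both inherited it, which is what distinguishes this from per-project prompt engineering.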
Component 4: Where Semantic Layers Stop
Some enterprises already have semantic layers — dbt metrics layer, Cube, Looker. These help but do not fully resolve organizational cold start. A semantic layer defines metric logic; it does not supply governance context, lineage provenance, ownership, deprecation status, or the institutional decisions behind a metric. Understanding where semantic layers stop and context layers begin is important: the context layer wraps the semantic layer with the broader organizational knowledge an agent needs to function correctly.
Atlan’s context layer builds on top of whatever semantic infrastructure you have. It does not replace your dbt metrics layer — it extends it with the governance and institutional context that turns a metrics definition into something an agent can reason about safely.
Measuring cold-start severity in your own environment
Before investing in context layer infrastructure, assess your cold-start severity. The key signals are: time spent on manual documentation before each AI initiative, agent error rate on domain-specific queries, number of governance incidents caused by agents citing deprecated fields, and the proportion of agent failures attributable to missing organizational knowledge versus model limitations.
Most enterprise AI teams treat agent failures as model quality problems before diagnosing whether the failure is a cold-start problem. Misdiagnosis is expensive. Prompt engineering and model upgrades will not fix a cold-start failure. The first diagnostic question is: “Did this agent fail because it lacked capability, or because it lacked context?” If the answer is context, the framework below will help you size the problem.
Cold-Start Severity Tiers
Tier 1: Low severity (session cold start only)
Indicators:
- Your agents operate on well-documented, stable datasets with few ambiguous field names
- Agent failures are primarily about not remembering prior sessions, not about misinterpreting your data
- Your data estate has fewer than 50 active tables with clear, consistent naming
- Agents are used by a single team with shared vocabulary
Fix: A session memory layer (Mem0, Zep) is likely sufficient. Organisational cold start is low risk at this scale.
Tier 2: Moderate severity
Indicators:
- Agents regularly misinterpret ambiguous metrics (`revenue`, `active users`, `conversion`)
- Your data estate includes deprecated fields that agents occasionally surface in answers
- Multiple teams use the same terms to mean different things
- Agent onboarding for each new initiative requires 1–2 weeks of manual documentation
Fix: Begin building a business glossary and lightweight lineage map. Consider a data catalog as the foundation for a context layer.
Tier 3: High severity (the Workday pattern)
Indicators:
- Agents fail on the first production query despite passing all staging tests
- Your team spent more than 2 weeks documenting context before deploying the last agent
- You have run manual validation suites of 500+ test cases because there is no canonical source of truth
- Cross-functional agents serving finance, ops, and product simultaneously give inconsistent answers to the same question
- Cold-start tax repeats with every new agent initiative
Fix: Context layer infrastructure is required. Session memory tools alone will not resolve this pattern. The enterprise context layer is the appropriate investment class. Gartner’s finding that 63% of organizations lack AI-ready data management practices means most enterprises reading this page are operating at Tier 2 or Tier 3.
Quick Self-Assessment Checklist
- [ ] Do your agents know which fields are canonical versus deprecated?
- [ ] Can an agent resolve `revenue` to the correct calculation without prompting?
- [ ] Does your data estate have documented lineage from source to consumption?
- [ ] Is organizational context documented once and reused across AI initiatives, rather than re-created manually for each one?
- [ ] Do agents behave consistently when different teams ask semantically equivalent questions?
If you answered “no” to three or more: your organizational cold start is unresolved.
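The scoring rule above can be made mechanical. A trivial sketch, where the dictionary keys are invented labels for the five checklist questions:

```python
def cold_start_unresolved(answers: dict) -> bool:
    """True when three or more checklist items were answered 'no',
    matching the rule stated above."""
    no_count = sum(1 for answered_yes in answers.values() if not answered_yes)
    return no_count >= 3


# Hypothetical assessment: three "no" answers -> cold start unresolved.
answers = {
    "canonical_vs_deprecated_known": False,
    "revenue_resolves_without_prompting": False,
    "lineage_documented": True,
    "context_reused_across_initiatives": False,
    "consistent_cross_team_answers": True,
}
```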
Cold-start failure mode: zero context vs. rich context
The diagram below illustrates the two deployment paths. The first is the cold-start path — the default for most enterprise deployments today. The second is the context-layer path — where every agent inherits organizational knowledge before its first query.
The cold-start path (left) sends an agent into production data with zero organizational context. The context-layer path (right) pre-loads governed metadata, lineage, and business definitions before the first query — eliminating the failure modes that cause agents to fail in production after working in demos. Accuracy and tool-call stats are drawn from Snowflake’s internal experiment (March 2026); results will vary by environment and data estate complexity.
Wrapping up
The AI agent cold-start problem is not one problem — it is two, and conflating them is the most expensive mistake enterprise AI teams make. Session cold start is solvable with mature memory tooling; that problem is largely handled by the Mem0 and Zep ecosystem. The organizational cold start — the condition in which an agent has never been introduced to the organization’s data estate — is not handled by any memory layer, no matter how sophisticated.
It requires context layer infrastructure: governed metadata, business glossary, lineage, and institutional knowledge built once and inherited by every agent. The evidence is now concrete. The Snowflake experiment shows +20% accuracy from adding an ontology. The Workday revenue agent could not answer a single question until a translation layer was built. Enterprise onboarding runs 8–24 weeks because teams are manually resolving cold start from scratch on every deployment. These are not model problems. They are cold-start problems with a known architectural solution.
Your agents are not failing because the model is wrong. They are failing because the organization has never been introduced to them.
FAQs about the AI agent cold-start problem
1. What is the cold start problem in AI agents?
The cold start problem in AI agents refers to two distinct failures. Session cold start occurs when an agent begins each conversation with no memory of prior sessions, forcing users to re-establish context repeatedly. Organisational cold start occurs when an agent has no knowledge of the organization’s data estate — its metric definitions, lineage, governance rules, and institutional vocabulary. Memory tools resolve session cold start; only a context layer resolves organizational cold start.
2. How do you solve the cold start problem for enterprise AI?
Solving the enterprise AI cold start requires treating the two problems separately. Session cold start is solved by memory layer tools such as Mem0, Zep, or LangMem, which persist conversation history and inject it at session start. Organisational cold start requires a context layer — a governed, machine-readable representation of the data estate including business glossary, lineage, ownership, and metric definitions. Most enterprise failures stem from the organizational layer, which no memory tool addresses out of the box.
3. What is the difference between a memory layer and a context layer for AI agents?
A memory layer stores and retrieves what happens in agent sessions — conversation history, user preferences, prior decisions. A context layer supplies what the agent needs to know about the organization before any session begins — metric definitions, data lineage, business glossary, governance context, and institutional decisions. Memory solves session persistence. Context solves organizational ignorance. They are complementary but not interchangeable; confusing them leads to investing in the wrong infrastructure for the actual failure mode.
4. Why do AI agents fail in production after working in demos?
Agents pass demos because demo environments are hand-crafted to avoid the ambiguities that exist in production data. In production, agents encounter real field names, deprecated tables, ambiguous metrics, and cross-functional vocabulary conflicts that were absent in staging. This is the organizational cold start in action: the agent was never introduced to the organization’s actual data estate. Snowflake’s published experiment showed this gap directly — an agent with full data access still improved accuracy by 20% once an organizational ontology was added.
5. How long does it take to onboard an AI agent in an enterprise?
Without a context layer, enterprise agentic AI onboarding spans 8–24 weeks for initial proof-of-concept phases, with 3–4 weeks consumed by knowledge base development before agents can function. Complex environments add 1–4 weeks further. With a pre-built context layer, this timeline compresses dramatically — one Atlan customer in insurance reduced a projected 12-month build to approximately 1 month. The onboarding timeline is almost entirely determined by how much of the organizational cold start has already been resolved.
6. Can Mem0 or Zep solve the AI agent cold start problem?
Mem0 and Zep correctly solve session cold start — they persist conversation history and inject prior context at session start. They do not solve organizational cold start. Neither tool was designed to supply business glossary entries, governed metric definitions, data lineage, or institutional governance decisions. If an agent fails because it does not know what revenue means in your organization, a session memory tool will not fix that. The failure is in the context layer, not the memory layer.
7. What is the AI context gap?
The AI context gap is the distance between what a language model knows from general training and what an enterprise agent needs to know to function correctly in a specific organization. The gap encompasses the organization’s metric definitions, canonical data sources, data lineage, governance decisions, business glossary, and institutional vocabulary. The context gap is the root cause of organizational cold start — and it cannot be closed by prompt engineering, model upgrades, or session memory tools alone.
8. What is context engineering and how does it relate to cold start?
Context engineering is the discipline of designing, structuring, and delivering the right information into an AI agent’s context window so it can reason accurately about a specific domain. It directly addresses organizational cold start by building the systems that ensure every agent receives governed business context at initialisation — metric definitions, lineage, governance rules — rather than requiring manual prompt injection per initiative. Where prompt engineering shapes individual queries, context engineering shapes the knowledge infrastructure agents inherit.
9. How do I give my AI agent knowledge about my company’s data?
The most durable approach is building a context layer: a governed repository of business glossary entries, metric definitions, data lineage maps, and canonical field documentation in a format agents can consume at initialisation. In practice, this means tagging and documenting your data estate in a data catalog, defining canonical metrics with owners and calculation logic, mapping lineage from source to consumption, and establishing a feed from that catalog into your agent’s working context. This is what eliminates the manual documentation bottleneck that most teams hit on every new AI initiative.
10. What did Snowflake’s ontology experiment prove about agent context?
Snowflake’s published experiment (March 2026) added a plain-text organizational ontology — join keys, table grains, cardinality and fanout hints — to an agent already receiving Snowflake semantic views. Final answer accuracy improved by 20% and average tool calls decreased by approximately 39%. The result proved that even an agent with structured data access could not infer organizational relationships from data alone. Explicit context — the kind a context layer provides — was required to end the agent’s organizational cold start.
Sources
- Snowflake — “The Agent Context Layer for Trustworthy Data Agents” (Josh Klahr and Rajhans Samdani, March 2026): https://www.snowflake.com/en/blog/agent-context-layer-trustworthy-data-agents/
- MIT — “The GenAI Divide: State of AI in Business 2025” (via Fortune, August 2025): https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
- Gartner — “Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027” (June 2025): https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027
- Gartner — “Lack of AI-Ready Data Puts AI Projects at Risk” (February 2025): https://www.gartner.com/en/newsroom/press-releases/2025-02-26-lack-of-ai-ready-data-puts-ai-projects-at-risk
- Bain — “Executive Survey: AI Moves from Pilots to Production” (November 2025): https://www.bain.com/insights/executive-survey-ai-moves-from-pilots-to-production/
- LangChain — “State of Agent Engineering” (late 2025): https://www.langchain.com/state-of-agent-engineering
- Anthropic — “Effective Context Engineering for AI Agents”: https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
