The holy grail of product launches is giving people something that’s both magical and real. It’s not easy to do well. But when you hit that sweet spot, people notice. Customers become unsolicited advocates, and hands start going up to learn more.
That’s what we saw at Atlan Activate 2026, our biggest product launch event yet. The chat was lighting up with customer experiences:
“I was concerned about that too, but after going through the Context Accelerator pilot, we are fully sold on these Context Agents.”
“We’ve used this and saved hundreds of human hours.”
“The AI gets smarter as it gets more and more verified content – a real gamechanger.”
Real validation from real customers.
That’s so important right now, when context has been reduced to a buzzword. You see it in every vendor pitch, analyst report, and conference talk, but 56% of CEOs still report zero financial benefit from AI. So it’s not surprising that people are questioning whether the context hype for AI is real.
Here’s what I know: In the last decade, the AI industry has compounded intelligence by over 1,000x every six months. In that time, context has compounded by almost nothing. The context your AI agents have access to today is largely the same context that’s been sitting in dashboards, Slack threads, and people’s heads for years.

At Atlan Activate 2026, we took an important step in changing that. It wasn’t more talk about context and what it could potentially do. We swapped the buzzwords and marketecture diagrams for live demos on real data. We showed how to bootstrap, build, and operationalize context across systems. Here are the biggest takeaways.
Context Agents: Solving the cold start problem in days, not months
The question I hear from CDOs most often isn’t “how do I choose the right model?” It’s “we have 500,000 undocumented assets. Where do we even begin?”
This is the context bootstrapping problem, and it’s one of the most common walls companies hit as they move from AI use cases to multi-agent, enterprise-wide systems. You need enriched metadata to get value from AI, but enriching metadata at scale requires humans — and humans don’t scale. The result is a 9-12 month enrichment project where data stewards are stuck documenting tables, while AI initiatives wait.
But what if AI agents could handle the enrichment so humans were only needed to govern the outcomes? We built Context Agents to find out.
In 2025, all of Atlan’s customers combined created 25,000 human-written descriptions for assets in Atlan. In a recent two-week Context Agents Accelerator program, 50 teams created over one million AI-generated descriptions. With a fraction of the people and 4% of the time, Context Agents increased output by 40x.
During Activate, we showed what Context Agents can do. The system comprises nine AI agents across three context tiers: foundational, derived, and compounded. Each agent handles a different aspect of documentation, from analyzing query history and access patterns, to resolving metric definition conflicts, to mapping what every term means in every context.
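The tiered structure is the key idea: each tier builds on the evidence the previous one produced. Here is a minimal sketch of that flow — the class, function names, and sample values are all illustrative assumptions, not Atlan’s actual agent APIs:

```python
# Hypothetical sketch of tiered context enrichment: foundational agents gather
# raw evidence, derived agents reconcile it, compounded agents map meaning.
from dataclasses import dataclass, field

@dataclass
class AssetContext:
    name: str
    foundational: dict = field(default_factory=dict)  # e.g. query history, access patterns
    derived: dict = field(default_factory=dict)       # e.g. reconciled metric definitions
    compounded: dict = field(default_factory=dict)    # e.g. cross-context term mappings

def run_foundational(asset: AssetContext) -> AssetContext:
    # Stand-in for agents that analyze query history and access patterns.
    asset.foundational["top_queries"] = ["SELECT revenue FROM orders WHERE ..."]
    return asset

def run_derived(asset: AssetContext) -> AssetContext:
    # Stand-in for agents that resolve conflicting metric definitions,
    # grounded in the foundational tier's evidence.
    asset.derived["metrics"] = {"revenue": "net of refunds"}
    return asset

def run_compounded(asset: AssetContext) -> AssetContext:
    # Stand-in for agents that map what each term means in each context.
    asset.compounded["glossary"] = {"revenue": asset.derived["metrics"]["revenue"]}
    return asset

asset = run_compounded(run_derived(run_foundational(AssetContext("orders"))))
```

The point of the layering is that no tier invents context from nothing: each one consumes what the tier below already verified.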

Adrianna Clark, Senior Data Governance Analyst at Engine, shared her experience in the Context Agents Accelerator program, admitting that she was skeptical at first. She’d just launched Atlan and had a full company rollout to manage, but no dedicated data stewards available for manual documentation.
“I was definitely nervous going in. I thought that there might be some gaps in the definitions,” she recalled. “But once I went through it, it was very accurate. It was already pulling out our most used assets and it had collections ready for me. It was very efficient, and I was shocked by how fast it was.”
The efficiency and accuracy surprised Adrianna, but what won her trust was something more specific: agents were automatically surfacing the business questions real users had been asking about the data, pulling from years of SQL query history. “This is the type of context we need for our agents. It definitely enabled a sense of trust for me,” she explained.
Her broader observation is the one I keep coming back to:
“For years, we’ve been taught that in order to document human tribal knowledge, it has to be a very manual process. What I experienced with Context Agents is that you can now lead with technology. I was able to run this entire exercise on my own. The old model assumes you have the headcount, the people, the capacity. When you don’t, this makes it possible.”
— Adrianna Clark, Senior Data Governance Analyst, Engine
That’s what it looks like to break through the cold start problem. Capacity is no longer a ceiling because Context Agents deliver accurate documentation that the business can rely on.
Adrianna’s experience demonstrated that the constraint was never intelligence. It was always context infrastructure. Watch the full recording.
Why AI pilots fail to scale — and what Context Engineering Studio fixes
Let’s say your context layer has fresh, accurate asset descriptions. Now what? How do you use it to go from raw assets to a trusted, production-ready AI agent?
This is a problem most teams haven’t solved. Organizations using context-aware systems report 94-99% accuracy, but those without grounded context hover at just 10-31%.
The problem isn’t the model, but rather what the model has to work with. It’s hard to know how much context is enough, and when to move from testing to production. As one leader at a global lifestyle brand told us: “We’ve tested our customer service analytics chatbot for a month and a half, and no one feels like they’re at the end of testing. You can’t test an infinite number of questions.”
We built Context Engineering Studio to break through the trust and readiness barrier, bringing scattered knowledge to production-ready agents in days, not months.
The test: Raw context vs. Atlan’s context layer
During Activate, Atlan Staff Engineer Anirudh Agarwal showed a live build inside Cursor, with two MCP servers connected: Atlan’s MCP vs. Claude-managed agents. Anirudh pasted in a single natural language prompt to build a customer support agent. Rather than manually defining what the agent needed, the framework pulled automatically from Atlan’s context layer. Without manual intervention, Atlan:
- Found the right tables
- Located the SOPs stored in Atlan’s knowledge folders
- Built skill files dictating how the agent should act
- Generated semantic models so the agent understood which columns to query and why
- Created the identity document that made the agent specific to a company, including operating principles and follow-up behavior
- Deployed the agent into production
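The steps above can be sketched as a single assembly function. This is a hypothetical illustration — the `CONTEXT_LAYER` structure, field names, and `build_agent` helper are assumptions for the sketch, not Atlan’s real MCP tools:

```python
# Illustrative sketch: instead of hand-writing a long prompt, the builder
# issues one request and the agent spec is assembled from shared context.

CONTEXT_LAYER = {
    "tables": ["support_tickets", "customers"],                     # right tables, found automatically
    "sops": ["refund_policy.md", "escalation_playbook.md"],         # SOPs from knowledge folders
    "semantic_model": {"support_tickets.status": "open | pending | resolved"},
    "identity": {"tone": "empathetic", "follow_up": True},          # company-specific behavior
}

def build_agent(prompt: str, context: dict) -> dict:
    """Assemble a deployable agent spec from the context layer, not a hand-maintained prompt."""
    return {
        "goal": prompt,
        "tables": context["tables"],
        "knowledge": context["sops"],
        "semantics": context["semantic_model"],   # which columns to query and why
        "identity": context["identity"],          # operating principles, follow-up behavior
    }

agent = build_agent("Build a customer support agent", CONTEXT_LAYER)
```

The design point is that when the underlying data changes, the context layer changes with it — nothing in the agent spec is hardcoded by a human.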
The side-by-side comparison made the stakes concrete. The customer service agent running on raw context and the one running on Atlan’s context layer were given the same customer complaint. The raw agent gave a generic response. The Atlan-powered agent got the facts right, responded appropriately, and followed up, providing a helpful, personalized experience. The difference? All the skills, tools, and knowledge lived in Atlan’s context layer, not in a prompt someone had to write and maintain.
This matters for AI builders because the demo ran entirely in Cursor, the IDE they already work in. There’s no new system to learn, no manual prompt engineering, and no brittle hardcoded instructions that break every time the underlying data changes. Atlan’s context layer is accessible directly from the tools builders already use, making context engineering a practice instead of an ad hoc responsibility.
Avoiding testing hell
Before any agent goes to production, Context Engineering Studio converts your dashboards and historical SQL into automated test suites. These include hundreds of simulated questions, pass/fail rates, and specific context gaps identified before they reach a user. Instead of spot checks and intuition, this provides you with an eval framework that tells you exactly where your context is failing and what to fix.
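The eval idea is simple to sketch. In this minimal illustration (the sample questions, `fake_agent` stand-in, and report fields are all assumptions, not the Studio’s actual API), historical questions become test cases, and failures surface as named context gaps:

```python
# Turn historical questions into a test suite, then report the pass rate
# and the specific context gaps behind each failure.

history = [
    {"question": "What was Q3 churn?", "expected": "4.2%"},
    {"question": "Top region by revenue?", "expected": "EMEA"},
]

def fake_agent(question: str) -> str:
    # Stand-in for the agent under test; it only knows about churn.
    return {"What was Q3 churn?": "4.2%"}.get(question, "I don't know")

def run_suite(agent, cases):
    failures = [c for c in cases if agent(c["question"]) != c["expected"]]
    return {
        "pass_rate": 1 - len(failures) / len(cases),
        "gaps": [c["question"] for c in failures],  # context to fix before launch
    }

report = run_suite(fake_agent, history)
```

Instead of “we’ve tested for a month and a half and no one feels done,” the team gets a number and a to-do list: close the named gaps, rerun the suite, ship when it passes.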
For Jessie Buelteman, Senior Manager of Enterprise Data Governance at Elastic, this visibility is essential for building scalable talk-to-your-data use cases inside the company’s finance organization. Accuracy requirements are non-negotiable: a wrong answer goes back to the data team, and that compounds at scale.
“Our assets live across multiple tools. We don’t want lock-in. We need a solution that can work with us and scale as technology evolves and use cases pivot,” he explained. “The data layer is the context foundation. That’s going to make these raw facts really interpretable and usable by any machine without any human intuition.”
Instead of constantly hard coding prompts, Jessie’s team now has all their context “in the data layer, where it belongs.” The next step is ensuring that the context is versioned, validated, and portable across every team that needs it, when they need it.
That’s where Context Repos come in: version-controlled, bounded, and portable units of business context that work the same way GitHub repos work for code. Teams build, validate, and secure human certification before anything ships, following the same review-and-approval infrastructure already used for SQL models.
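The lifecycle is easy to picture in code. This is a hypothetical sketch — the `ContextRepo` class and its fields are illustrative assumptions, not Atlan’s API — but it captures the gating: context can’t ship until it is both validated and human-certified:

```python
# Hypothetical Context Repo lifecycle: versioned context that must pass
# automated validation plus human certification before it ships,
# mirroring code review for SQL models.
from dataclasses import dataclass

@dataclass
class ContextRepo:
    name: str
    version: str
    validated: bool = False
    certified: bool = False
    certified_by: str = ""

    def validate(self):
        self.validated = True           # automated checks, e.g. the eval suite passes

    def certify(self, reviewer: str):
        if not self.validated:
            raise ValueError("cannot certify unvalidated context")
        self.certified = True           # human sign-off, like an approved pull request
        self.certified_by = reviewer

    @property
    def shippable(self) -> bool:
        return self.validated and self.certified

repo = ContextRepo(name="finance-metrics", version="1.4.0")
repo.validate()
repo.certify(reviewer="data-governance")
```

Because the unit is versioned and bounded, any team can pull the same repo and get the same answers — which is exactly the portability requirement in the next section.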

Jessie reflected on how this benefits his team: “It provides users with self-serve capabilities with confidence. It’s not like they ask a question and then it comes back to the data teams to validate. That alleviates a huge amount of work.”
“Any team, any tool, same answer is what we want to try to drive.”
— Jessie Buelteman, Senior Manager of Enterprise Data Governance, Elastic
The bar isn’t simply whether a solution is good enough for most users. The standard should be delivering the same answer for every user, regardless of which tool they’re in or which team they’re on.
The agent sprawl problem and the architecture that fixes it
Every AI tool you deploy today — Snowflake Cortex, Databricks Genie, agents built in Claude or Gemini — rebuilds context from scratch. The same definitions get created sixteen different ways across sixteen different systems. Every new tool requires another context rebuild. That is agent sprawl, and it’s the same mistake we made with BI sprawl in the data warehouse era.

The Context Lakehouse avoids that. It’s an open, Iceberg-native architecture where context lives and is accessible to any execution engine via MCP, API, or SQL. We saw 8 billion reads in 90 days, all from agents and MCPs reading shared context across every system in the enterprise. Each one of those reads represents a query that didn’t require someone to rebuild context for a new tool, team, or use case.
We recruited our partners at Cyera and Immuta to drive this home. Cyera’s data classification engine flows directly into Atlan at the column level. When a column is flagged as containing PII, that sensitivity context is automatically wired into every downstream AI workflow. Agents inherit it — they don’t have to infer it. Immuta’s dynamic policy enforcement ensures that the context agents are working with is governance-aware at the point of consumption. Both integrations, as well as many others, are live in Atlan’s Open Context Ecosystem.
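Here is the inheritance idea in miniature. The column names, classification values, and `agent_can_expose` helper are hypothetical illustrations (not the Cyera or Immuta APIs), but they show why agents inherit sensitivity rather than inferring it:

```python
# Illustrative sketch: a column-level PII flag set once in the context layer
# is inherited by every downstream consumer at the point of consumption.

COLUMN_CONTEXT = {
    "customers.email": {"classification": "PII", "source": "Cyera"},
    "customers.region": {"classification": "public"},
}

def agent_can_expose(column: str, context: dict) -> bool:
    # Policy enforcement where the data is consumed (the role Immuta plays):
    # the agent reads the classification instead of guessing it.
    return context.get(column, {}).get("classification") != "PII"
```

One flag, set once, governs every downstream workflow — no agent has to re-derive what’s sensitive, and no workflow can quietly forget.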

This is what it means for context to be portable. Context should be your institutional knowledge, not beholden to a specific vendor. The enterprises that build a context layer on top of a proprietary system are betting that system will always be the right one. But since the AI tool landscape changes every six months, that bet is hard to justify.
It’s time to start compounding context
Here’s what stayed with me after we wrapped: participants in the chat stopped asking whether a context layer is real, and started asking real, practical questions about operationalizing it.
What people are asking now is how fast they can integrate a context layer, and what it takes to keep it useful. Every agent interaction generates a trace, patterns surface automatically, and the gaps close. Agents get smarter without the model changing. That’s what it looks like when context compounds.
We’ve spent enough time compounding and commoditizing intelligence. It’s time to focus on the context. That’s what will determine which enterprises actually win with AI in the next decade.
The companies doing this work right now — running Context Agents, building in Context Engineering Studio, grounding their semantic layers in shared, portable context infrastructure — will be a year ahead when the next set of models arrives. Context doesn’t depreciate the way models do. Every piece of context you build belongs to your organization and compounds over time.
We spent years describing this. Now it’s real, and ready for you to see.
If you missed the live session, catch the full recording here.