I don’t know how many of you have watched Inside Out as an adult. Both parts. I love them. Pixar has a way of naming things the rest of us feel but can’t quite articulate — and the thing Inside Out named, the thing that stayed with me long after the credits, is this: all emotions are valid. Joy spends the entire first film trying to keep Sadness out of the control room. She sidelines her, works around her, shuts her out. And it makes everything worse. The breakthrough doesn’t come from managing emotions better. It comes from finally understanding what each one is trying to tell you.
I’ve been thinking about that a lot lately. Because over the past few months, working at the intersection of AI and enterprise data, I’ve watched the same pattern play out in team after team. People who genuinely want to use AI, who believe in it, and who have everything they need to get started. And yet, something stops them. Or slows them down. Or makes them endlessly prepare instead of begin.
The finger often gets pointed at the technology. But in reality, it’s a feelings problem. And the feelings have names.

The metadata supply problem that never got solved
For as long as I’ve worked in data, the metadata supply problem has been the same. You know which assets matter. You know which tables your analysts rely on, which dashboards leadership checks every Monday. You also know that almost none of them are properly documented. No descriptions. No context. No record of what they mean or how they’re used.
So teams do what they’ve always done: ask the data stewards, schedule the sessions, build the templates, and follow up. Months pass. The documentation that does come in is uneven — some assets lovingly described, most still blank. The people who were supposed to write it had other priorities. Documentation was the thing they were supposed to do on top of their real work.
For most organizations, meaningful metadata enrichment has taken 9 to 12 months, if it ever happens at all — a pattern Gartner confirms across industries, with 63% of organizations still lacking the data management foundations their AI initiatives need. It’s not that people didn’t care; it’s that they were being asked to do something at a scale and consistency that humans were never built for. The context existed in SQL, query logs, and the heads of analysts who’d been with the data for years. The bottleneck was always the pipeline to surface it.
I’ve spent five years at Atlan watching this problem. I’ve sat with data leaders across industries who described the same frustration in different words. And I’ll be honest: for a long time, we didn’t have a real answer either.
Context Agents: A different way forward
To get to the heart of the problem, we started by rebuilding Atlan’s metadata infrastructure from the ground up as the Context Lakehouse, built on Iceberg. At the same time, foundation models crossed a capability threshold. Together, these let us build Context Agents: AI that reads your entire enterprise data graph and harnesses context in a way no human could.
Context Agent Studio is Atlan’s capability for AI-led context generation, which addresses one of the walls we’ve seen most companies hit as they scale AI use cases. It harnesses the context already available in your data estate — data lineage, SQL transformations, query patterns, your existing metadata — and uses it to generate your missing context at scale: descriptions, README documentation, SQL intelligence (popular business questions, joins, filters, and more), metric glossaries, domain tagging, linked assets, and more.
When we saw what was possible, we wanted to put it in the hands of people who shared the vision — enterprise teams that had been living with the metadata supply problem for years and were ready to try something fundamentally different. We opened early access to a select group and ran Context Agents Accelerator: two weeks, real data, real outcomes, and a lot of real feelings along the way.
What I want to share isn’t a product announcement. It’s what that experience taught me about the human side of using AI — specifically, the three emotions that show up when people start doing something genuinely new with technology they haven’t fully trusted yet.
Trust, fear, and control: The three emotions that AI raises
Going back to Inside Out: these aren’t emotions to fix or suppress. They’re signals that show up because you care about quality, you’ve built things that matter, and you take your work seriously. The question isn’t how to make them go away. It’s how to hear what they’re telling you, and reframe the response.
😨 Trust
“How can AI generate better descriptions than my human analysts who have so much experience?”
This one makes complete sense. Your analysts know things no machine could — the history, the politics, the edge cases that never made it into any documentation. The tribal knowledge that lives in people, not systems.
AI isn’t competing with your analysts’ knowledge. It’s reading everything that knowledge has already produced — the SQL they wrote, the lineage they built, the queries they ran thousands of times — and synthesizing it across every asset at once. It doesn’t replace the expert. It scales what the expert already created.
“I expected boilerplate. Instead, it inferred business context I never explicitly provided, correctly describing how an asset fit into our customer journey just from column names and lineage. That’s when I realized this was knowledge synthesis, not just documentation.”
— Swatilekha Saha, Technical Program Manager, DAT Freight & Analytics
What I’ve seen, every time a team actually runs context agents on their data, is that the trust question answers itself. One team in our cohort systematically rated every piece of output — over 23,000 descriptions — and found that 70% were immediately rollout-ready. Four hundred were so good the team flagged them as the standard they’d want human-written descriptions to meet. They came in skeptical. The output changed the conversation.
“We were stunned by the quality of the content produced by these AI agents. How could it create such high-quality context from lineage, from SQL logic, from dbt logic? It really shows how much business information is hidden in metadata that we simply cannot see with human eyes. But agents pick it up, consume it all, and surface it in an organized way.”
— Kenneth Jebjerg, Data Analytics & Governance
Trust is earned through evidence, not persuasion. The fastest way past this emotion is to show someone the output on data they know. Research on AI trust reaches the same conclusion: adoption stalls not when the technology fails, but when teams hold AI to a standard of perfection instead of comparing it to the real alternative — which is usually blank.

😰 Fear
“If I roll this out and something isn’t accurate, I’ll lose credibility with my team.”
This one I felt personally. When I was building Hermione, an internal AI agent we built at Atlan to surface account health signals for our field team, I was genuinely afraid to show it to anyone. The output was useful, and I knew it. But instead of rolling it out, I quietly pulled two colleagues aside and asked for private feedback first. I didn’t want to be the person who shipped something broken and became a cautionary tale. No one does.
One of them said something I’ve thought about almost every day since: “Nandini, you don’t know that you’re competing with us not having anything.”
That was the whole reframe. We weren’t comparing AI output to perfect human documentation. We were comparing it to blank. And blank is a very low bar to clear.
The downside of rolling out AI-generated context — clearly labeled as such — is that some descriptions will be imperfect. The downside of not rolling it out is that your users, both humans and AI agents, keep navigating blank assets with nothing to go on.
The teams that overcame this fear fastest were the ones who stopped asking permission and started showing results. One team enriched tens of thousands of assets and briefed their leadership the same morning they saw the output. The leadership was immediately receptive. The fear of the conversation, it turned out, was bigger than the conversation itself.
“Allow yourself to trust the process. If you do that a bit earlier, it could be surprising for you.”

😬 Control
“I need to review and verify every single description before anything goes live.”
This is the most understandable emotion of the three, and the hardest to let go of. When you’ve spent years caring about the quality of your data, the instinct to review everything before it reaches your users isn’t obstruction. It’s integrity.
The shift comes when, instead of asking “how do I verify a million descriptions,” you ask “how do I build a system where quality is maintained at scale?” You move from reviewer to architect. The unit of work changes, and suddenly the thing that felt impossible — enriching everything — becomes a design task instead of a manual one.
One team in our cohort started the program with a firm position: they wouldn’t release AI-generated content to their users, and they weren’t going to run it in production. By the end of the two weeks, they had enriched over 25,000 assets and were one of the loudest advocates for AI-generated context in the group. The product didn’t change. Their perspective did — shaped by watching peers they respected go through the same journey and come out the other side with more confidence than they’d walked in with.
“With single-digit hours of effort from our team, we were able to accomplish work that would have taken months for a larger team to finish — and realistically, we likely would not have ever started it.”
— Lexie McGillis, Senior Manager, Data Governance
But the bigger shift wasn’t what they built; it was where it took them. When teams brought this context back to their wider organizations, the conversations changed entirely. They weren’t talking about documentation anymore. They were being asked: how do we make this the nervous system of our org? What does our context layer strategy look like at scale? How do we feed this into the agents our engineering team is building?
The governance teams and stewards who had spent years being asked to document things found themselves being asked to architect something. That’s the agency waiting on the other side of control: not just doing less manual work, but stepping into a more strategic role at the moment it matters most.
The goal isn’t to review everything. It’s to build a system you trust — one that creates the capacity to work on higher-value problems instead.

The tipping point for Context Agents
Here’s the number I can’t stop thinking about.
In all of last year, across all our customers combined, roughly 25,000 human-written descriptions were created on assets in Atlan. That was the aggregate output of governance teams working hard, in parallel, across hundreds of organizations.
In the three weeks since we started the Context Agents Accelerator, a single cohort of 50+ teams crossed one million AI-generated pieces of metadata. What took an entire year of human effort was surpassed inside three weeks, by a fraction of that group.
- 1M+ description updates
- 15K+ README updates
- 25K+ SQL Intelligence runs
- 110K+ hours of manual work saved
What Context Agents generated in a matter of weeks would have taken years of manual human effort. These weren’t descriptions that were going to get written eventually. They were never going to get written — not because no one cared, but because the manual model had a ceiling. AI removed the ceiling on what humans could accomplish.
One participant described it this way: “I was intimidated about this whole initiative. I wasn’t sure if it was feasible or if we’d have something presentable by the end of year. This gave us a huge leap forward. The AI just really enables us to do things we wouldn’t be able to do otherwise.”
Another said their team’s attitude “changed instantly” the moment they saw what the SQL intelligence surfaced about how their data was actually being used, and they quickly discovered that it could be a force multiplier.
Give up control. Retain agency.
After watching this up close, here’s what I know: the people who have been building context layers in enterprise data — the governance teams, the stewards, the people who have been documenting data thanklessly for years — are about to become some of the most strategically important people in any AI-forward organization.
Not in spite of AI, but because of it. McKinsey’s research consistently shows that organizations with strong data and governance foundations extract significantly more value from AI than those without — the work governance teams have been doing for years, often without recognition, turns out to be exactly what AI scale requires.
When AI takes on the production work — the descriptions, the SQL intelligence, the READMEs — two things happen at once. The pressure to personally document everything lifts, and what gets built becomes the foundation not just for your human users, but for every AI agent your organization runs. One shared context layer serves both.
But here’s what I really want you to hear. You already know the conversations you haven’t been able to have — with the AI team, with engineering, with leadership. They’re the conversations about what your data actually means, what it would take to make your agents trustworthy, and how the context layer could become your organization’s most valuable asset. You’ve had the expertise to be in those rooms for years. You just haven’t had the bandwidth.
When AI removes the documentation ceiling, it doesn’t just free up hours. It frees up your role. Governance teams are moving from writing descriptions to architecting the context layer that feeds their company’s entire AI strategy. That’s a seat at the table that didn’t exist two years ago.
The question is whether you’re willing to let go of the work that’s keeping you from being in the room.
Joy wins when you start
At the end of our two weeks together, I shared a photo with the group: 32 people in a garage in Menlo Park in the 1970s. It was the Homebrew Computer Club. Hobbyists, tinkerers, people who believed something was possible before the world had agreed on it. Steve Jobs and Steve Wozniak were two of the 32. That group laid the foundation for personal computing.
We are in a similar moment. AI is doing to knowledge work what personal computing did to individual productivity: not replacing the humans, but fundamentally changing what a single human can accomplish. McKinsey estimates generative AI could add $4.4 trillion annually to the global economy — and knowledge workers, especially those who work with data and context, stand to see the biggest shift. And the people who have been building context layers for years, often without recognition or the tools to do it at scale, are exactly the people positioned to lead what comes next.
Going back to Riley and her emotions: Trust, Fear, and Control will show up when you start doing something that matters. They showed up for me with Hermione. They show up for every team I’ve watched go through this. They will show up for you. That’s not a warning. That’s just what it feels like to do something real.
The answer isn’t to wait until they go away. The answer is to name them, understand what they’re telling you, and start anyway.
Joy wins when you start.
A huge thank you to the Context Agents Accelerator cohort: you came with a vision, you brought the energy, and you made this real.
Nandini Tyagi is one of the founding members of Atlan’s customer experience team and today leads Strategic Initiatives focused on how Atlan’s AI products are launched and adopted across customers. She works closely with data leaders, helping them move from AI strategy to implementation, and writes about what it actually takes to make AI stick inside large, complex organizations.