Are Dashboards Dead? The 5 Factors Leaders Can't Agree On (And What That Means for You)
When we asked five data leaders point-blank whether dashboards are dead, the room split instantly.
One thumb up. Multiple thumbs down. A lot of sideways glances.
That reaction captured something that goes beyond a simple yes-or-no question. It surfaced the tension shaping every data leader’s 2026 roadmap:
Are we witnessing the end of dashboards, or just another hype cycle about AI analysts that will fade like so many “data democratization” promises before it?
At The Great Data Debate, we brought together Cindi Howson of ThoughtSpot, Barry McCardel of Hex, Tristan Handy of dbt Labs, Chris Child of Snowflake, and Shubham Bhargav of Atlan to hash it out. What emerged wasn’t consensus, but something more valuable: a map of where the real fault lines are, and what that means for the future of AI analytics.
The real problem behind the AI analyst hype
Before diving into the fate of dashboards, the panel tackled a more fundamental question: What problem are enterprises actually trying to solve with AI analytics?
The terminology—talk to data, conversational analytics, AI analysts—matters less than understanding what problem you’re solving. And that problem, according to Cindi, is universal. “It really is speeding up that insight-to-action loop,” she said. “We’ve been working on this as an industry for decades.”
But organizations are at wildly different starting points. Some are just now embracing natural language interfaces as a way to speed up insights. “They’re afraid that this might open up, as somebody said to me this week, Pandora’s box,” Cindi acknowledged. “They don’t want the CEO or business users to be able to ask their own questions.”
But the more mature organizations? They’re already using conversational analytics not just to query metrics, but to understand why something happened and what to do next.
Barry McCardel framed it differently: “Dashboards raise more questions than answers. And 80% of the interesting things you want to do with data really don’t fit into that 2D thing.”
The insight-to-action loop. Questions that raise more questions. The gap between what executives request and what users actually need. These are the real challenges—not whether you can chat with your data, but whether that chat actually helps you make better, faster decisions.
Dashboards aren’t dead – they’re specializing
Let’s get to the question on everyone’s minds: Are dashboards actually going away?
Tristan Handy laid out a framework for thinking about dashboard survival in two parts: “One is: I need to curate this thing that I want the whole company to reference. It’s part of a business process, highly traversed, consistent and predictable. The other is dashboards have historically been created as part of ad hoc analysis, and I think that part is largely going away.”
The split is this: mission-critical dashboards live, ad hoc analysis dashboards die.
But Cindi Howson wasn’t entirely convinced by this clean bifurcation. “The definition and how the dashboard gets created is key. A starting point for KPIs,” she said. “But do we need to take weeks to create it? Can something generated from a conversational interface then become a product of it?”
In other words: What if the future isn’t dashboards vs. AI analysts, but AI analysts that create dashboards on demand?
Chris Child brought it back to fundamentals with a car dashboard analogy. Just like you need your speedometer and fuel gauge in a consistent, reliable place—not generated fresh each time you start the car—certain business metrics need the same treatment.
“You need to be able to say, this is the metric I care about, define it, and lock it in somewhere,” he argued. “Traditionally that was in the BI tool, but it should sit a layer lower.”
The implication: Whether AI generates it or humans build it, some metrics are too important to be ephemeral. They need to be defined once, governed tightly, and displayed consistently.
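To make that concrete, here’s a minimal sketch of what “defining a metric once, a layer lower than the BI tool” could look like. Everything here (the MetricDefinition shape, METRIC_REGISTRY, the SQL expression) is an illustrative assumption, not any vendor’s actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: once registered, a definition can't be mutated
class MetricDefinition:
    name: str
    sql: str          # the single governed expression every tool must use
    owner: str        # the team accountable for this number
    description: str

# A central registry that dashboards and AI agents both read from,
# so "revenue" means the same thing everywhere it appears.
METRIC_REGISTRY = {
    "monthly_recurring_revenue": MetricDefinition(
        name="monthly_recurring_revenue",
        sql="SUM(amount) FILTER (WHERE subscription_status = 'active')",
        owner="finance-data",
        description="Active subscription revenue, recognized monthly.",
    ),
}

def resolve_metric(name: str) -> MetricDefinition:
    """Tools look metrics up here instead of improvising their own SQL."""
    return METRIC_REGISTRY[name]
```

Where the registry lives (a warehouse, a semantic layer, a catalog) matters less than the fact that there is exactly one of it.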
But Barry McCardel challenged that neat categorization: “There’s a lot of stuff that got wedged into dashboards that didn’t really belong there. Dashboards were the only place people with access to data could publish anything.”
He envisions dashboards becoming a subset of what you can build—“super democratized” interactive experiences, not the catch-all for everything analytics.
The takeaway? Dashboards aren’t going away—they’re becoming more specialized. They’re for the metrics that matter so much you need them locked down, versioned, and visible to everyone. Everything else? That’s moving to conversational interfaces.
As Tristan observed: “The interface whereby that dashboard is created is not the best way to do ad hoc analysis. Conversational interfaces are better. ChatGPT interaction is actually very similar to a Jupyter notebook interaction, just with natural language.”
The accountability question: Who owns the answer?
In a dashboard world, accountability is crystal clear. The data team builds it, the business consumes it, and when something’s wrong, everyone knows who to call.
In an AI analyst world, it’s much murkier.
Tristan proposed a radical framework: bifurcate your governance and put accountability for low-stakes answers on the users themselves.
“Most quantitative questions that users have internally are actually in this category where it’s not catastrophic if they’re a little wrong,” he asserted. “And in those cases, the business user, whoever’s asking the question in the first place, they’re fully responsible for their answer.”
That works for 90% of questions. The other 10%? “That needs to be highly governed. You have to make sure that your agents use these definitions as opposed to trying to kind of cook things up from scratch.”
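As a rough sketch of that bifurcation, imagine a triage step in front of the AI analyst. The keyword matching here is a deliberately crude stand-in; a real system would match questions against governed metric definitions, not a word list:

```python
# The ~10% of topics that must be locked down (invented examples).
GOVERNED_TOPICS = {"revenue", "churn", "headcount"}

def route_question(question: str) -> str:
    """Crude triage: high-stakes topics go through governed definitions;
    everything else is answered ad hoc, with the asker owning the result."""
    tokens = set(question.lower().split())
    if tokens & GOVERNED_TOPICS:
        return "governed"  # the agent must use the central metric definitions
    return "ad_hoc"        # fast and flexible; the business user is accountable

assert route_question("What is our churn trend this quarter?") == "governed"
assert route_question("Which blog posts got the most clicks?") == "ad_hoc"
```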
Cindi Howson wasn’t buying the 90/10 split entirely. She referenced a conversation with a client in the defense space: “Directionally accurate is fine if I’m asking about movies and book recommendations. If I’m trying to make a decision about where to drop a bomb, I better get that really accurate.”
The tension here is real and unresolved: How do you build systems that are fast and flexible for low-stakes questions while remaining rigorous for high-stakes decisions? And who decides which bucket each question falls into?
Shubham Bhargav showed why even categorizing questions is harder than it looks. If you ask an AI analyst to name the top 10 customers for a specific region, it needs you to define the metric: Is it based on revenue? Consumption? Retention? The Product team will likely have a different answer than the Finance team.
A human analyst would simply ask which metric you meant. An AI analyst without that context can’t. That’s the accountability problem in a nutshell: When the system can’t ask for clarity, who’s responsible when it guesses wrong? The business user who asked? The data team who built it? The AI itself?
At the end of the day, the problem isn’t technology—it’s context.
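Context is exactly what makes asking possible: if the ambiguity is encoded somewhere the system can see, it can ask instead of guessing. A toy sketch, with a hypothetical AMBIGUOUS_TERMS map standing in for a real context layer:

```python
from typing import Optional

# Ambiguous business terms mapped to the candidate metrics different teams
# mean by them (a hypothetical map, standing in for a real context layer).
AMBIGUOUS_TERMS = {
    "top customers": ["revenue", "consumption", "retention"],
}

def clarify_or_pass(question: str) -> Optional[str]:
    """Return a clarifying question instead of guessing when a term is ambiguous."""
    for term, candidates in AMBIGUOUS_TERMS.items():
        if term in question.lower():
            return f"Which definition of '{term}' do you mean: {', '.join(candidates)}?"
    return None  # unambiguous: safe to hand off to the query engine

print(clarify_or_pass("Show me the top customers in EMEA"))
# -> Which definition of 'top customers' do you mean: revenue, consumption, retention?
```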
What happens to the data analysts?
If AI can analyze data, what happens to data analysts?
Barry framed it as evolution, not extinction: “Good analysts, good people in those types of dedicated analytics roles, are really storytellers. They understand their audience. They understand the context. Something tells me that AI is not going to supplant these entire roles in the way we think. They’re just going to evolve.”
The shift is from answering questions to enabling AI to answer them. Teams with mature AI programs are already reaching that point. Cindi shared a concrete example from one of ThoughtSpot’s customers.
“Dashboard analysts and core teams have offloaded the simpler questions,” she explained. “Data teams have been elevated—retrained as GenAI application builders to solve more sophisticated problems.”
But Tristan identified an emerging role that will be even more critical: the analytics engineer.
“There’s a lot of context that isn’t about your data, it is about your business. And the question is, who is going to encode that and how are they going to encode it? I think it goes into a more general framework like the skills framework that Anthropic launched recently, because that context needs to be accessible not just to data products, but to every agent interface in your company.”
The implication: Data teams might spend less time building dashboards and more time building the semantic and contextual foundations that make AI agents actually useful. Which raises the question Cindi posed:
“We thought the analytics engineer would be the sexiest role of the 21st century. Is it now really an AI whisperer or an AI coach?”
The semantic layer is back (but it’s not how you remember it)
If there was one point of near-consensus, it was this: semantics matter more than ever. But what “semantic layer” means has fundamentally changed.
“Semantic layers may be back, but not in the way we expected,” said Shubham. “In the AI world, you need much more than semantics. You need business context. You potentially also need data context. It is a context layer now.”
Chris Child doubled down on this, comparing context for AI analysts to onboarding processes for human analysts.
“When you point LLMs at structured data, you have to still give them the same information that you would give to an analyst,” he explained. “You don’t just hire an analyst and throw them into the business. The context about the business is incredibly important.”
Chris pointed to a key architectural shift: semantic information used to sit in BI layers but is now moving to the Open Semantic Interchange. “The idea is how do we centralize that and make sure that you have one definition that AI can use, that Hex can use, that ThoughtSpot can use, that everyone can integrate with.”
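As a toy illustration of that centralization idea (the panel didn’t specify the actual interchange format, so the JSON shape below is invented), one serialized definition gets parsed by every consumer:

```python
import json

# One shared definition, serialized once. The shape is invented for
# illustration; it is not the actual Open Semantic Interchange spec.
payload = json.dumps({
    "metric": "active_customers",
    "expression": "COUNT(DISTINCT customer_id) WHERE status = 'active'",
    "grain": "month",
})

# Every consumer (a BI tool, a notebook, an AI agent) parses the same payload,
# so "active_customers" cannot drift between tools.
for consumer in ("bi_tool", "notebook", "ai_agent"):
    definition = json.loads(payload)
    assert definition["metric"] == "active_customers"
```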
But Barry challenged the big-bang approach. Creating a semantic model strategy and rolling out a large-scale implementation “feels very brittle,” he argued. Instead, he asked: “How can we help build a system to get smarter the more you use it?”
Shubham agreed: “There’s no 100% coverage of semantic layer or a context layer on day zero. It’s a continuous activity. You start with your benchmark, you start with your bare minimum. You have context feedback loops where you find the gaps. You plug that gap, you potentially put that back as a test case. And you just reiterate on that context layer.”
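A minimal sketch of that loop, with invented names: each reported gap both patches the context layer and becomes a permanent regression test, so coverage grows with use:

```python
from dataclasses import dataclass, field

@dataclass
class ContextLayer:
    """A toy context layer that grows from feedback instead of a day-zero rollout."""
    definitions: dict = field(default_factory=dict)
    test_cases: list = field(default_factory=list)

    def report_gap(self, question: str, expected: str, new_context: dict):
        # 1. Plug the gap with the missing business context...
        self.definitions.update(new_context)
        # 2. ...and freeze the failure as a regression test case.
        self.test_cases.append({"question": question, "expected": expected})

    def run_benchmark(self, answer_fn) -> float:
        """Replay every captured case; the score tracks how well context holds up."""
        if not self.test_cases:
            return 0.0
        passed = sum(answer_fn(tc["question"], self.definitions) == tc["expected"]
                     for tc in self.test_cases)
        return passed / len(self.test_cases)
```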
The shift from static semantic models to dynamic context layers isn’t just architectural — it’s philosophical. It recognizes that business meaning evolves, and the infrastructure supporting AI needs to evolve with it.
The takeaway: The semantic layer isn’t a project — it’s a practice. And it’s not just about metric definitions anymore. It’s about encoding how your organization actually thinks, and continuously refining that understanding alongside the business.
Build vs. buy: The reality check leaders need to hear
Should enterprises build AI analysts themselves or trust vendors to get it right?
Chris Child’s answer was both – but with a catch: “The companies that we’re seeing are getting the most value out of building AI agents and offloading work from analytics teams onto these agents are the teams that have a good foundation in place. Having a good foundation in place hasn’t changed dramatically in the world of AI. It’s just the bar’s gotten a bit higher.”
What exactly is a “good foundation”? Data that flows into or is queryable from one place. Trustworthy models. Strong governance.
But Barry McCardel pushed back on waiting for the perfect foundation. The traditional approach of having a big, one-time implementation doesn’t work when the space is moving so fast.
His counter-argument? Build systems that get smarter through use, not upfront perfection. Don’t wait for comprehensive governance before you start learning what actually works.
Chris agreed with the need to move fast, but stressed the importance of creating safe boundaries for experimentation: “Make sure you’re creating space for people inside your company to play with these new latest things and try them in a safe way. We have a meeting every two weeks now where anyone on the product or data science team gives demos of what they’re doing with AI. And half of the stuff is not stuff I would have ever thought about doing, but people are doing it already and it’s working well.”
Barry’s closing advice cut through the entire debate: “There’s a lot of buzz right now. People with predictions of where things are going to go in a couple years. No one really knows. I would focus on trying to adopt solutions that are working today. It’s easy to get caught up in a lot of the marketing hype. At the end of the day, what’s actually working?”
The tension remains unresolved: Do you build foundations first, or do you learn through doing? The answer, frustratingly, might be both. And that means most organizations will need to find their own balance between structure and speed.
The hot take roundup
We closed the debate with a rapid-fire round of closing arguments. Here’s what our panelists had to say:
Cindi: “AI analysts should be part of your data team. And for those who are still sitting on the fence about this change, I’m going to end with a quote that the president of Cisco shared in Davos two weeks ago: ‘If you don’t like change, wait until you experience irrelevance.’”
Barry: “Seek realness and authenticity when you’re interfacing with all of this content and vendors, more than getting caught up in the hype cycle.”
Tristan: “The thing that I’m most focused on right now is the role of the analytics engineer. I think analytics engineers have this massive, massive role to play in the future of agents.”
Chris: “Remove as much friction as you possibly can. It takes work, but do that work. Going and trying these things—you’ll be amazed at what’s possible today and it will help you to find those rough edges.”
Shubham: “It’s not about ‘will dashboards die or not?’ and ‘who will own context?’. It’s about reimagining how the world works. You need strong processes around benchmarking. For critical use cases, you should know how good or bad your analyst is. If it is 80% accurate but 50% consistent, I don’t think it will do the job.”
My takeaways?
- AI analysts should become part of the data team, taking over routine questions so humans can focus on higher‑value work.
- The critical future work is encoding business context (not just data semantics) so all agents and interfaces can use it, with analytics engineers playing a central role in this.
- Because no one truly knows the end state, organizations should prioritize solutions that actually work today and learn from real usage, rather than chasing hype and long‑range predictions.
- Winning with AI requires a strong data/semantics foundation plus intentional space for teams to experiment and demo real AI use cases, so the org discovers what’s genuinely valuable.
- We need to reimagine workflows around where context comes from and how it’s used, anchored by rigorous benchmarking so AI analysts are accurate and consistent on the most critical decisions (a minimal scoring sketch follows this list).
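On that last point, accuracy and consistency are different measurements, and both are cheap to score. A minimal sketch, assuming an `ask` callable that wraps your AI analyst and returns a string answer:

```python
from collections import Counter

def benchmark(ask, question: str, expected: str, runs: int = 20):
    """Ask the same question repeatedly; score the analyst two ways."""
    answers = [ask(question) for _ in range(runs)]
    # Accuracy: how often the answer matches ground truth.
    accuracy = sum(a == expected for a in answers) / runs
    # Consistency: how often the analyst agrees with its own most common answer.
    consistency = Counter(answers).most_common(1)[0][1] / runs
    return accuracy, consistency
```

An analyst that is 80% accurate but only 50% consistent fails the second check even when it passes the first, which is exactly Shubham’s warning.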
The debate retrospective: What changed?
We polled the audience at both the start and end of the session: “By 2027, what happens to dashboards?”
The winner both times? At the start, 56% said dashboards will be “one of several interfaces, alongside AI analysts.” At the end, 55% said the same.
After 30 minutes of debate, the needle barely moved. If anything, that stability tells us something important: The future isn’t either/or. It’s both/and.
Dashboards aren’t dead. They’re just not the only game in town anymore. And AI analysts aren’t magic—they’re tools that work when you’ve built the right foundation of governance, semantics, and context.
The strategic question for 2026 isn’t whether to bet on dashboards or AI analysts. It’s whether you’ve built the foundations—the governance, the semantic layer, the context infrastructure—that make either one actually useful.
Because as this panel made abundantly clear: The technology is the easy part. The hard part is teaching your systems what your business actually means.
Want to hear the full debate? Watch the recording to catch all the spicy takes, technical deep-dives, and moments where the panelists definitely didn’t agree.
Ready to see these concepts in action? Save your seat for Atlan Activate to learn from data leaders building the foundations that make AI analysts actually work in production.
Atlan is the next-generation platform for data and AI governance. It is a control plane that stitches together a business's disparate data infrastructure, cataloging and enriching data with business context and security.
AI Analysts & Dashboards: Related reads
- What Is an AI Analyst: Definition, architecture, and use cases
- Conversational Analytics: How natural language interfaces are changing BI
- Context Graph vs Knowledge Graph: Key differences for AI
- Who Will Own the Context Layer: Data teams vs. AI teams debate
- Context Layer 101: Why it’s crucial for AI
- The Great Data Debate 2026: Watch the full recording
