There's a 2.5x Gap in AI Revenue Satisfaction – And It Has Nothing To Do With the Models

TD Sarma, Director of Analyst Relations
Published: 03/25/2026
8 min read

Key takeaways

  • AI ROI leaders invest 60% on foundations — data quality, governance, and people — at a 1.78× foundations-to-tools ratio.
  • Top AI spenders show 70% revenue satisfaction vs. 28% among the lowest — a 2.5× gap tied to foundational investment.
  • Data quality is the top inhibitor at every spending level. More AI on a broken foundation amplifies problems, not outcomes.

In 2024, two in five organizations had deployed AI. By 2025, nearly three in five. Today, four in five are increasing their investment.

That’s one of the fastest adoption curves any enterprise technology has traced. At the Gartner Data & Analytics Summit in Orlando this March, it was the opening number — delivered by analyst Adam Ronthal as a signal of momentum, commitment, and organizational conviction that AI is the defining strategic bet of this era.

Then came the number that matters more.

Rita Sallam, Distinguished VP Analyst and Gartner Fellow, revealed that only 1 in 5 AI investments show measurable ROI. And a recent Gartner report found that one in eight organizations say GenAI tools “aren’t likely to live up to promises” — arguably a harder verdict than just not showing ROI.

More spending. Less return. That isn’t a late-cycle correction. It’s a structural problem, and it has a specific cause.


The Foundation Inversion


Call it the Foundation Inversion: organizations systematically overinvest in AI tools and underinvest in the infrastructure on which those tools depend.

It’s understandable behavior. Tools are visible. They have demos. They show up in vendor presentations with compelling stories about what they can do — once your data is ready.

That qualifier is doing enormous work. And most organizations are ignoring it.

A Gartner study published in March 2026, co-authored by Sallam’s team, puts a number on the gap. Organizations most satisfied with their AI outcomes invest at a 1.78× foundations-to-tools ratio. Roughly 60% of their total AI spend goes to data quality, governance, and people — not platforms.
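A quick sanity check on how that ratio maps to a budget share (my arithmetic, not the report's): if foundations and tools were the only two budget lines, a 1.78× ratio would put about 64% of spend on foundations — consistent with the "roughly 60%" figure once other cost categories enter the mix.

```python
# What budget share does a 1.78x foundations-to-tools ratio imply?
# Illustrative arithmetic only; the 1.78x figure is from the Gartner study.
foundations_to_tools = 1.78

# Assuming foundations and tools together made up the entire AI budget:
foundations_share = foundations_to_tools / (1 + foundations_to_tools)
print(f"foundations share: {foundations_share:.0%}")  # -> foundations share: 64%
```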

Their least satisfied peers flip the equation: the majority of their spend goes to tools, the minority to the infrastructure those tools depend on.

[Chart: Foundation Inversion — Satisfied vs. Unsatisfied AI Spenders. AI ROI leaders allocate nearly twice as much to foundations as to tools; their least satisfied peers do the opposite. Source: Atlan]

“Organizations most satisfied with AI outcomes invest approximately 30% more in data, governance, and talent than their least satisfied peers — and they invest nearly twice as much in those foundations as in AI technology.” — Gartner, March 2026

This isn’t a marginal difference. It’s a structural one. The organizations getting AI to work aren’t running better models. They’re building the layer underneath the models first.

The Gartner data also tracks the investment trajectory: organizations are on track to double their median AI spend between 2024 and 2026. Doubling spend on tools doesn’t double outcomes when the underlying foundation is broken. More budget flowing into the wrong layer just amplifies the problem.


The real inhibitors aren’t what you think


When Gartner surveyed D&A leaders on what’s slowing them down, the answer wasn’t “we don’t have the right model” or “our AI platform isn’t capable enough.” Across every spending level, the same inhibitors appeared — and none of them were about the tools.

Data quality issues were the single biggest blocker. That’s because AI doesn’t create data quality problems; it exposes and amplifies the ones that already exist. A model that ingests poorly governed, ambiguously labeled, or contextually underdefined data will produce confident, fluent, wrong outputs. The better the model, the more confidently it produces those wrong outputs.

Our own survey of 550+ data leaders confirms the pattern: nearly 40% cited data quality as the top obstacle to scaling AI.

Skills gaps came second. AI readiness in most organizations is lowest in people and talent — not platforms. The internal capability to govern, contextualize, and maintain the underlying data often isn’t present. Organizations have acquired tools they can’t fully use because the foundational competency to support them wasn’t built first. In our survey, 31% of data leaders reported skills gaps as a significant blocker to AI scalability.

Budget instability compounds both. According to Gartner, 45% of D&A leaders say reallocation and cuts prevent them from delivering on AI commitments. Tools get funded based on excitement. Foundational work — slower, less visible, harder to demo to a steering committee — gets cut when the quarterly priority shifts. In fact, Gartner found that while AI agent spending is rising year over year, the composition of that spend hasn’t changed.

This is the Foundation Inversion playing out in real organizations: tools get funded on the upswing, foundations get defunded on impatience, and ROI never materializes. Then the cycle repeats with the next tool.


Where the ROI actually shows up


The organizations that escape this pattern get measurably different results.

Among the highest AI spenders, 70% report satisfaction with revenue growth from AI initiatives. Among the lowest spenders, only 28% can say the same — a 2.5× difference in revenue impact that traces directly back to foundational investment.

The use cases delivering the strongest revenue gains are sales and marketing, and product development — the areas closest to customers and revenue cycles. These aren’t abstract AI research projects. They’re the places where bad data and missing context have the most direct, immediate cost.

When you fix the foundation, the revenue-critical use cases work. When you don’t, they don’t.

The data also shows something that should concern any leader who believes the solution is simply spending more: the highest AI spenders still underperform if they’re spending on the wrong layer. More budget flowing into tools — without rebalancing toward foundations — doesn’t close the ROI gap. It entrenches it. This isn’t a quantity problem; it’s a composition problem.


The counterargument: Build the tool, fix the foundation later


Here’s the hard truth: foundations take years to get right. Getting data quality, governance, and organizational capability to where they need to be is a multi-year program, not a quarter’s work.

But if you wait until your data is in perfect shape before deploying AI, you’ll never deploy. Get the tools in production, learn what breaks, and fix the underlying problems as they surface.

The organizations in Gartner’s most-satisfied cohort didn’t fix their foundations before deploying. They invested in foundations and tools simultaneously — at the right ratio. The 1.78× figure isn’t a sequencing argument. It’s a portfolio allocation argument. They’re doing both; they’ve just stopped treating tools as the hard part.

The “fix it later” approach produces exactly what the Gartner data shows: AI deployed, no measurable ROI, and mounting organizational frustration that expensive platforms didn’t deliver on their promise. The tools weren’t the problem. The problem was that the foundation was never there.


Three moves that close the AI ROI gap


If you’re a data or analytics leader trying to shift your organization into the 1-in-5, the prescription is specific.

1. Audit your current AI spend ratio


What percentage of your AI budget goes to tools — licenses, platform costs, model access — versus governance, data quality, and talent development? If your ratio is inverted relative to 1.78×, you’ve identified the problem. You don’t need a new tool. You need a reallocation.
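That audit can be as simple as tagging each budget line as tools or foundations and computing the ratio. A minimal sketch, with hypothetical line items and figures (nothing below comes from the Gartner study except the 1.78× benchmark):

```python
# Hypothetical AI budget line items — names and amounts are illustrative.
ai_budget = {
    "model_api_access": 400_000,            # tools
    "platform_licenses": 350_000,           # tools
    "data_quality_program": 300_000,        # foundations
    "governance_tooling_and_staff": 250_000,  # foundations
    "talent_development": 150_000,          # foundations
}

TOOLS = {"model_api_access", "platform_licenses"}

tools = sum(v for k, v in ai_budget.items() if k in TOOLS)
foundations = sum(v for k, v in ai_budget.items() if k not in TOOLS)

ratio = foundations / tools
print(f"foundations-to-tools ratio: {ratio:.2f}x")

# Compare against the 1.78x benchmark from the Gartner study.
verdict = "inverted" if ratio < 1.0 else "below benchmark" if ratio < 1.78 else "leader-level"
print(verdict)
```

In this illustrative budget the ratio comes out below 1.0 — the inverted pattern the article describes — even though foundations receive three of the five line items.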

2. Attack the inhibitors the data actually names


Data quality is the top blocker across every spending level. If you’re adding AI capability without first addressing the quality, lineage, and governance of the data feeding it, you’re adding complexity on top of a broken foundation. The model will find every gap you haven’t found yet — and act on it.

3. Engineer context into your architecture


At the Gartner D&A Summit, Rita Sallam was precise about this: “Context is now the new critical infrastructure.” She elaborated: “Context needs to be engineered into architecture so that agents can operate with trust and situational awareness.”

This is an architectural stance, not a vendor recommendation. The organizations winning at AI have made this shift. They’ve built the context layer that tells AI what their data means, who owns it, and whether it can be acted upon. The organizations losing haven’t.


What’s at stake


The organizations that are increasing their AI investments have a decision to make. They can continue allocating the majority of that investment to tools and produce the same 1-in-5 ROI rate the industry currently shows, or they can rebalance to the ratio the data supports.

Gartner calls it foundational investment. Rita Sallam calls it critical infrastructure. The Gartner research calls it a 60% allocation to data quality, governance, and talent that could increase agentic AI accuracy by up to 80%.

We call it the context layer.

The most important AI investment in your portfolio right now isn’t the model. It’s the layer underneath it — the one that makes sure the model knows what your data means, who owns it, and whether it can be trusted to act on it. Without that layer, you’re in the 4 in 5 pouring money in. With it, you’re in the 1 in 5 getting results out.

The math is simple. The work isn’t. But it’s the work that matters for successful AI.


Atlan is the next-generation platform for data and AI governance. It is a control plane that stitches together a business's disparate data infrastructure, cataloging and enriching data with business context and security.

 

