Ontology vs. Semantic Layer: What Data Teams Need in 2026

Emily Winks, Data Governance Expert
Published: 01/30/2026 | Updated: 04/01/2026 | 23 min read

Key takeaways

  • Ontologies define domain meaning using OWL/RDF; semantic layers translate raw data into governed business metrics
  • Semantic layers define how revenue is measured; ontologies define what a customer is and how it connects across systems
  • Semantic layers deploy in 2–6 months for fast ROI; ontologies take 6–18 months but unlock AI reasoning capabilities
  • Most production AI systems need both — the semantic layer for calculation, the ontology for comprehension


What's the difference between ontology and semantic layer?

An ontology defines the meaning of concepts and their relationships in a formal, machine-readable schema, typically using OWL or RDF. Implementation typically takes 6–18 months. A semantic layer translates raw database schemas into business-friendly metrics for consistent querying, with initial deployment in 2–6 months. Ontologies govern meaning at the domain level. Semantic layers govern access at the query level. Both are necessary for AI-ready data architecture, but they solve fundamentally different problems.

Key differences:

  • Purpose: Ontologies model domain knowledge for machine reasoning; semantic layers standardize business metrics for consistent access
  • Standards: Ontologies use RDF, OWL, SPARQL; semantic layers use YAML, SQL, LookML, DAX
  • Reasoning: Ontologies support automated inference from formal logic; semantic layers execute pre-defined calculations only
  • Implementation: Ontologies require 6–18 months and formal logic expertise; semantic layers deploy in 2–6 months
  • AI readiness: Ontologies provide deep domain understanding; semantic layers provide governed, deterministic metric access



Ontology vs. semantic layer: comparison at a glance

| Dimension | Ontology | Semantic layer |
| --- | --- | --- |
| Primary purpose | Define domain meaning: concepts, relationships, rules for machine reasoning | Standardize business metrics and translate technical data into business terms |
| Origin | Knowledge management, AI research, library science | BI industry, analytics engineering |
| Core problem solved | Representing and reasoning about domain knowledge across systems | Consistent measurement and metric governance across tools and teams |
| Output | Knowledge graphs, inference results, and integrated conceptual models | Governed SQL queries, consistent metric calculations, standardized KPIs |
| Standards | RDF, OWL, SPARQL (W3C Semantic Web standards) | YAML/SQL definitions, OSI (emerging), LookML, DAX |
| Primary users | Data architects, AI engineers, knowledge engineers | Analytics engineers, BI teams, data product teams |
| Reasoning | Supports automated inference from formal logic | No inference; executes pre-defined calculations |
| Implementation effort | 6–18+ months; requires formal logic expertise | 2–6 months for initial deployment; faster ROI |
| AI readiness | Deep domain understanding for AI reasoning; not optimized for query execution | Governed, deterministic metric access for AI agents; lacks conceptual depth |

Consider a common scenario: finance reports revenue at $10.2 million in Power BI, marketing shows $10.4 million in Tableau, and an AI copilot surfaces $9.8 million in Slack. Three numbers, zero agreement.

When humans were the primary analytics consumers, teams reconciled discrepancies in meetings and moved on.

AI agents can’t do that. They don’t call colleagues for clarification or infer meaning from a column named rev_ttm_adj_v2. When an LLM encounters ambiguity, it hallucinates an answer or picks whichever pattern it finds first. A peer-reviewed study on knowledge-graph-augmented RAG found this happens 40% less often when domain knowledge is properly grounded, but only when the knowledge is structured, not assumed.

That’s why two architectural concepts are frequently conflated in data platform conversations: ontologies and semantic layers. Vendors often use the terms interchangeably. When organizations prioritize the wrong layer for their problem, AI projects encounter delays.

Key differences data teams need to know heading into 2026. Source: Atlan.

What does an ontology do, and why does it matter?


An ontology is a formal, machine-readable specification of domain concepts and their relationships. It encodes the rules that govern those relationships in a way both humans and machines can reason over. Expressed in OWL or RDF, ontologies answer: what does this thing mean, and how does it connect to everything else?

Tom Gruber’s 1993 definition: an ontology is “an explicit specification of a conceptualization.”

What are the core components of an ontology?


Every ontology is built on three elements:

  1. Classes: The types of things in a domain. Customer, Product, Order, Region.
  2. Properties/relationships: How those classes connect. A Customer “places” an Order. A Product “belongs to” a Category.
  3. Axioms/rules: Logical constraints that govern the domain. Every Order must have at least one Product. A Customer can only exist in one Region at a time.

Classes, Properties, and Axioms: the building blocks of every formal ontology. Source: Atlan.

These components are expressed using W3C Semantic Web standards. Resource Description Framework (RDF) represents knowledge as subject-predicate-object triples. Web Ontology Language (OWL) extends RDF with formal logic, enabling machines to infer relationships that were not explicitly stated. SPARQL is the query language for traversing RDF data.
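The triple model is easy to see in miniature. The sketch below is plain Python with invented `ex:` identifiers, not a real RDF store; a production stack would use an RDF library and a SPARQL engine. It stores facts as subject-predicate-object triples and answers a pattern query the way SPARQL treats unbound variables:

```python
# Subject-predicate-object triples: the RDF data model in miniature.
# Toy illustration only; identifiers like "ex:AcmeCorp" are invented.
triples = {
    ("ex:AcmeCorp", "rdf:type", "ex:Customer"),
    ("ex:AcmeCorp", "ex:places", "ex:Order42"),
    ("ex:Order42", "rdf:type", "ex:Order"),
    ("ex:Order42", "ex:contains", "ex:ProductX"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts like a SPARQL variable."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# "Which orders does AcmeCorp place?"
# Roughly: SELECT ?o WHERE { ex:AcmeCorp ex:places ?o }
orders = [o for (_, _, o) in match(s="ex:AcmeCorp", p="ex:places")]
print(orders)  # ['ex:Order42']
```

The point of the triple shape is that every fact is uniformly addressable, which is what lets reasoners and query engines operate over knowledge they have never seen before.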

What ontologies can do that nothing else can


The distinguishing capability is inference. If the ontology knows that “Platinum customers have revenue above $1 million” and “Acme Corp has revenue of $2 million,” it can classify Acme Corp as Platinum without anyone having to code that rule. No semantic layer or BI tool does this. Ontologies derive new facts from existing ones using formal logic.
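To make that inference step concrete, here is a hand-rolled sketch with invented names (`facts`, `infer_tiers`). A real ontology would express the rule declaratively as a class axiom and let an OWL reasoner derive the classification; the Python below only illustrates the effect: a new fact appears that was never explicitly stored.

```python
# Illustrative sketch of inference, not an OWL reasoner.
# Rule: a customer with revenue above $1M is Platinum.
facts = {
    "AcmeCorp": {"revenue": 2_000_000},
    "SmallCo": {"revenue": 300_000},
}

def infer_tiers(facts):
    """Derive customer tiers that were never explicitly asserted."""
    derived = {}
    for customer, attrs in facts.items():
        if attrs["revenue"] > 1_000_000:
            derived[customer] = "Platinum"
    return derived

print(infer_tiers(facts))  # {'AcmeCorp': 'Platinum'}
```

The difference in practice: here the rule lives in application code, while in an ontology it lives in the knowledge representation itself, where any reasoner, agent, or downstream system can apply it.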

This matters in several ways.

When multiple systems use different names for the same concept (Customer vs. Client vs. Account), an ontology provides a single source of truth for meaning, not just a shared metric definition. This enables cross-system semantic consistency across heterogeneous platforms.

For AI, the implications run deeper. An agent grounded in an ontology can automatically connect customer purchase history to regulatory requirements and product hierarchies. The ontology gives the agent permission to reason, not just retrieve.

Then there’s compliance. Healthcare uses SNOMED CT for clinical terminology. Financial services use Financial Industry Business Ontology (FIBO) for regulatory data. In these industries, formal ontologies aren’t optional. They’re table stakes.

Real-world example: Palantir’s ontology bet


Palantir’s Foundry platform is a visible enterprise ontology implementation. Their architecture integrates data, logic, action, and security into a single ontology-driven system.

But Palantir’s approach comes with a caveat. As one DEV Community practitioner noted, Palantir relies on forward-deployed engineers who work to deeply understand how a business operates, then manually build and maintain ontologies for each client. That model works for Fortune 500 budgets. It doesn’t scale to the average data team.

And that’s the core tension with ontologies: they’re powerful, but expensive and hard to build. Comprehensive ontologies need 6–18 months of implementation work. The tooling hasn’t kept up either. One practitioner on Hacker News noted that Protégé, the most popular open-source ontology editor, has been effectively abandoned for years, and that most ontologists prefer hand-editing plaintext Turtle files.



What does a semantic layer do, and how is it different?


A semantic layer sits between databases and every downstream tool. It translates raw schema into governed business concepts. It aims to ensure “revenue” means the same thing in Tableau, in an LLM pipeline, and in a REST API. Where ontologies govern meaning, semantic layers govern measurement, centralizing metric definitions so every consumer works from one source of truth.

If ontologies are about meaning, semantic layers are about measurement. The idea goes back to the 1990s. Business Objects introduced “universes” that abstracted complex database schemas into business terms. The core problem: different teams kept calculating “revenue” differently across tools, eroding trust and compounding reconciliation work. That problem has persisted, and AI adoption has amplified it.

Watch: How semantic layers govern metric definitions across BI tools and AI pipelines.

How a semantic layer works


A modern semantic layer has four components:

  • Metadata repository: Stores business definitions that map technical schemas to user-friendly concepts such as “Customer Lifetime Value” or “Monthly Recurring Revenue.”
  • Business logic engine: Centralizes metric calculations, KPI formulas, and aggregation rules. Define once, use everywhere.
  • Query translation: Converts business-level requests into optimized SQL. When someone asks for “revenue by region,” the semantic layer knows which tables to join and which filters to apply.
  • Access control: Enforces row-level and column-level security at the semantic level, so every downstream tool doesn’t have to reimplement permissions.
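The query-translation component is the easiest one to sketch. Everything below is illustrative: the `METRICS` registry, table names, and column names are invented, and production engines such as MetricFlow or LookML compile version-controlled definitions into optimized, dialect-specific SQL. The sketch shows the core move: a business-level request resolves to governed joins, filters, and aggregations.

```python
# Minimal sketch of semantic-layer query translation (names invented).
METRICS = {
    "revenue": {
        "sql": "SUM(o.amount)",
        "joins": ["JOIN customers c ON c.id = o.customer_id"],
        "filters": ["o.status = 'completed'"],  # governed edge-case handling
    }
}
DIMENSIONS = {"region": "c.region"}

def compile_query(metric, dimension):
    """Translate a business-level request into governed SQL."""
    m, dim = METRICS[metric], DIMENSIONS[dimension]
    lines = [
        f"SELECT {dim} AS {dimension}, {m['sql']} AS {metric}",
        "FROM orders o",
        *m["joins"],
        f"WHERE {' AND '.join(m['filters'])}",
        f"GROUP BY {dim}",
    ]
    return "\n".join(lines)

print(compile_query("revenue", "region"))
```

Because every consumer goes through the same `compile_query` path, the joins and filters that define "revenue" are applied identically whether the caller is a dashboard, a notebook, or an AI agent.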

Semantic layer in 2026


In 2026, the semantic layer market is converging on open standards and portability — the “define once, use everywhere” promise is finally becoming infrastructure-grade. The dbt Semantic Layer defines metrics in version-controlled YAML files. It compiles those definitions through MetricFlow into optimized, dialect-specific SQL. In October 2025, dbt Labs open-sourced MetricFlow under the Apache-2.0 license, making it a portable metric-rendering engine that any vendor can build on.

Beyond dbt, the landscape includes BI-embedded semantic layers (Looker’s LookML, Power BI’s DAX, Tableau’s data models), standalone/universal layers like Cube and AtScale, and data-platform-native layers like Snowflake Semantic Views and Databricks Metric Views.

A major development is the Open Semantic Interchange (OSI) initiative. Launched in September 2025 by Snowflake, Salesforce, dbt Labs, BlackRock, and RelationalAI, OSI creates a vendor-neutral specification for semantic metadata exchange. The v1.0 specification was published in January 2026 under an Apache 2.0 license, and the working group has since expanded to include Atlan, Collibra, Databricks, and more than 30 other organizations.

OSI matters because proprietary semantic standards are a barrier to AI adoption. If metrics are locked within a single vendor’s platform, AI agents can only access governed definitions from that system. Open standards make semantic definitions portable.

What semantic layers do well, and where they stop


The value proposition is metric consistency. When 15 analysts across three continents all query “monthly active users,” a semantic layer ensures they get the same number. It eliminates the reconciliation meetings, the conflicting dashboards, and the “which spreadsheet is right?” conversations that drain hours per year.

For LLM and AI pipelines, semantic layers provide something raw SQL never can: pre-computed, governed context that an agent can trust without second-guessing.

The ceiling, though, is conceptual depth. A semantic layer can tell you what “revenue” means and how to calculate it. But ask it what a customer is, how customers relate to markets and regulatory frameworks, or whether a specific account should be flagged as high-risk based on domain rules, and those questions fall outside what a semantic layer can answer. That’s ontology territory.

As Jessica Talisman wrote in Metadata Weekly: “A semantic layer is for lookup, an ontology is for context and reasoning.” The metadata in semantic layers is structural. The logic stays in syntax and compute, not in representation or context.

How do ontologies and semantic layers actually differ?


Ontologies and semantic layers emerged from different fields to solve different problems. Ontologies come from knowledge management and AI research, encoding domain meaning in formal logic. Semantic layers come from BI, centralizing metric definitions so every tool calculates “revenue” the same way. One governs what things mean. The other governs how things are measured.

What is the simplest way to differentiate between ontologies and semantic layers?


A semantic layer tells you what your revenue is. An ontology tells you what a customer is: what relationships it has, what market it operates in, and why its revenue matters in the context of your business strategy.

Both are needed. The mistake is treating them as interchangeable.

Why is AI forcing ontologies and semantic layers to converge?


AI agents need both governed metrics and domain understanding to produce reliable outputs. Semantic layers provide the calculation; ontologies provide the comprehension. Neither alone is sufficient for production AI. The industry consensus is converging: semantic layers are becoming critical infrastructure, while agents relying solely on protocol-level access without a semantic foundation face high failure rates.

For nearly two decades, these two concepts occupied different worlds. Ontologies belonged to academic research, life sciences, and intelligence agencies. Semantic layers were the domain of BI teams and analytics engineers.

AI changed that.

Why does the data readiness gap stall AI projects?


Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls. A 2024 survey of 248 data management leaders paints a similar picture: 63% of organizations either lack or are unsure if they have the right data management practices for AI.

Many projects fail not because the models are wrong, but because the data foundation lacks semantic grounding.

What AI agents need from each world


An AI-ready data architecture requires both ontologies and semantic layers, for different reasons.

Semantic layers solve the calculation problem. An AI agent querying revenue needs a deterministic definition: which tables to join, which filters to apply, and which edge cases to handle. Without this, the agent may generate incorrect queries or return numbers inconsistent with what Finance approved. Governed SQL generation and consistent calculations across tools are baseline requirements.

Ontologies address something deeper. Knowing how to calculate revenue isn’t the same as knowing what revenue means in the context of a customer’s market, their contract terms, or the regulatory framework they operate under. That kind of cross-domain reasoning — the ability to derive classifications from existing data without explicit programming — lives in the ontology layer.

Drop either layer, and you get a half-built system. Semantic layers without ontologies produce AI that’s precise but shallow. Ontologies without semantic layers produce AI that grasps concepts but can’t turn that understanding into governed, reliable numbers.

Semantic layers as critical infrastructure


At the Gartner Data & Analytics Summit in March 2026, the message was unambiguous: by 2030, universal semantic layers will be treated as critical infrastructure, on the same level as data platforms and cybersecurity. Rita Sallam, Distinguished VP Analyst, called context “the brain for AI” and declared universal semantic layers a “non-negotiable foundation.”

MCP alone won’t be enough. Analyst Andres Garcia-Rodeja warned that 60% of agentic analytics projects relying solely on the protocol will fail by 2028 without a semantic layer underneath.

Knowledge graphs received significant attention in 2025 as a core component of context-aware AI architectures. Industry analysts have been explicit: without knowledge graphs and semantic enrichment, data fabric architectures will not provide the contextual data necessary to reduce hallucinations in generative AI.

The convergence is already happening


The Futurum Group forecasts that the semantic layer will double its year-over-year growth rate from 16% in 2026 to 30% by 2031, making it the fastest-accelerating segment in the Data Intelligence stack.

At the same time, ontological capabilities are being integrated into mainstream data platforms. Microsoft Fabric IQ previews native support for ontologies for enterprise data teams. Vendors like Timbr.ai are building ontology-based semantic layers that add inference and relationship modeling on top of SQL, bridging the gap between traditional metric definitions and full domain reasoning.

Both are reaching toward the same goal from opposite directions.

Where do knowledge graphs and taxonomies fit in?


Knowledge graphs and taxonomies aren’t alternatives to ontologies or semantic layers. Knowledge graphs store the instances defined by ontologies, making domain concepts operational through cross-entity search, lineage, and relationship traversal. Taxonomies provide the simplest starting point for governance: hierarchical classification with no inference or lateral connections. Both are supporting infrastructure, not competing choices.

Knowledge graphs: Where ontology meets data


A knowledge graph is a graph-structured database that stores entities and their relationships. If an ontology is the schema (defining what “Customer” means and how it relates to “Order”), a knowledge graph is the data (storing every actual customer and every actual order as connected nodes).
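The schema-versus-instances split can be sketched in a few lines, with all names illustrative: the ontology describes what may connect to what, the knowledge graph stores what actually connects, and agents traverse the instances at runtime.

```python
# Sketch: ontology as schema, knowledge graph as instance data (names invented).
ontology = {
    "Customer": {"places": "Order"},   # class level: Customers place Orders
    "Order": {"contains": "Product"},  # class level: Orders contain Products
}
knowledge_graph = [                    # instance level: actual connected nodes
    ("acme_corp", "places", "order_42"),
    ("order_42", "contains", "product_x"),
]

def traverse(start, edges=knowledge_graph):
    """Follow relationships outward from one entity: the runtime context walk."""
    frontier, seen = [start], set()
    while frontier:
        node = frontier.pop()
        for s, _, o in edges:
            if s == node and o not in seen:
                seen.add(o)
                frontier.append(o)
    return sorted(seen)

print(traverse("acme_corp"))  # ['order_42', 'product_x']
```

Note that `ontology` and `knowledge_graph` are separate structures: you can validate the instance edges against the class-level schema, which is exactly the relationship between an OWL ontology and the graph built from it.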

In practice, knowledge graphs operationalize ontology concepts. They power cross-entity search, data lineage tracing, and impact analysis. For AI agents that need to traverse context at runtime, knowledge graphs are the mechanism that makes it possible. A peer-reviewed MEGA-RAG study found that integrating knowledge graphs into retrieval-augmented generation pipelines achieved over 40% reduction in hallucination rates compared to standalone LLMs, specifically in biomedical question-answering tasks.

For more on the distinctions, see the guide to context graph vs ontology.

Taxonomies: The starting point for governance


Think of a taxonomy as a stripped-down ontology. It captures parent-child relationships only. No inference, no lateral connections, no formal logic. Product categorizations, data domain hierarchies, PII classification tags: these are all taxonomies, and they’re often the first governance artifact an enterprise builds.

When classification is all you need, a taxonomy does the job. But the moment AI agents need to reason across categories or infer relationships that cross hierarchical boundaries, you’ll need to formalize what you have into a proper ontology.
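A taxonomy reduces to parent-child pairs, which is why it is both quick to build and limited. A minimal sketch with invented category names:

```python
# A taxonomy as parent-child pairs: classification only, no inference,
# no lateral relationships. Category names are illustrative.
parent = {
    "Laptops": "Electronics",
    "Phones": "Electronics",
    "Electronics": "Products",
}

def ancestors(node):
    """Walk up the hierarchy: the only traversal a taxonomy supports."""
    path = []
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

print(ancestors("Laptops"))  # ['Electronics', 'Products']
```

Everything here is a vertical walk. The moment you need "Laptops relate to warranty regulations" you have left taxonomy territory, because that edge crosses hierarchies.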

| Dimension | Ontology | Semantic layer | Knowledge graph | Taxonomy |
| --- | --- | --- | --- | --- |
| Primary purpose | Define the domain meaning | Govern metric access | Store connected facts | Classify into hierarchies |
| Data representation | OWL/RDF axioms | Metric definitions (YAML, SQL) | Graph triples (nodes + edges) | Parent-child trees |
| Industry standards | OWL, RDF, SPARQL | MetricFlow, LookML, MDX | RDF, Property Graph, JSON-LD | SKOS, controlled vocabularies |
| Primary user | Data architects, AI engineers | Analytics engineers, BI teams | Data engineers, AI/ML teams | Data stewards, governance teams |
| Query mechanism | SPARQL, OWL reasoners | SQL, REST API, semantic queries | Graph query (Cypher, SPARQL) | Browse/filter hierarchy |
| AI agent readiness | High: formal reasoning | High: structured context | High: traversable context | Low: classification only |
| Governance role | Semantic authority | Metric consistency | Relationship integrity | Asset classification |
| Atlan support | Ontology import/mapping | Semantic layer integration | Native metadata graph | Custom classification taxonomies |

When should you use an ontology, a semantic layer, or both?


Three paths exist. Start with a semantic layer when metric inconsistency is the primary problem. Start with an ontology when regulatory compliance or cross-domain AI reasoning requires formal definitions. Most enterprises preparing for production AI will need both, beginning with the semantic layer for fast ROI and layering ontological capabilities as use cases mature.

| If your use case is… | Start with | Then add | Atlan’s role |
| --- | --- | --- | --- |
| Consistent BI metrics across teams | Semantic layer (dbt, AtScale) | Ontology for cross-domain consistency | Semantic layer integration + governed metrics |
| AI agent needs structured context | Knowledge graph + MCP | Ontology for reasoning capability | Native metadata graph + MCP support |
| Classify and tag data assets | Taxonomy | Knowledge graph for relationship context | Custom classification + lineage |
| Cross-system entity resolution | Ontology (OWL/RDF) | Knowledge graph for instance data | Ontology mapping + active metadata |
| Regulatory/compliance audit trail | Ontology + taxonomy | Semantic layer for metric governance | Full context layer: all four |
| LLM pipeline with governed access | Semantic layer | Knowledge graph for entity context | Active metadata + semantic integration |

Start with a semantic layer when:

  • Analysts across teams disagree on what “revenue” or “active user” means
  • You’re deploying conversational AI or “chat with your data” and need governed metric access
  • Your data stack is relatively consolidated (one primary warehouse, a few BI tools)
  • You need fast ROI — semantic layers typically take 2–6 months for initial deployment

Start with an ontology when:

  • Regulatory compliance requires formal representation of domain concepts (healthcare, financial services, government)
  • AI use cases require cross-domain reasoning, connecting customer data to product data to regulatory data
  • Data integration spans dozens of heterogeneous systems with conflicting vocabularies
  • You need inference capabilities: automated classification, risk scoring, or anomaly detection based on domain rules

Build both when:


Most enterprises preparing for production AI will eventually need both. The practical path starts with the semantic layer: define your core metrics, govern them centrally, and get your BI tools and AI agents reading from the same source. That alone delivers ROI within months. Domain knowledge comes next, and you don’t need a formal ontology engineering project to start. A business glossary is a lightweight ontology. Data products with governed schemas carry ontological context without requiring a full OWL implementation.

Once those foundations are in place, add governance and trust signals (lineage, quality scores, ownership, certification) to turn raw definitions into auditable context. Then expose the whole stack to AI agents via MCP or similar protocols. That’s the point where the context layer stops being a concept and starts being infrastructure.

What is the context layer, and how does it unify everything?


A context layer combines semantic definitions, domain knowledge, data lineage, governance policies, and trust signals into one governed interface that both humans and AI agents can consume. It treats ontologies and semantic layers as complementary components of a single system, rather than competing architectural choices. Atlan’s active metadata platform is built to operate across all of these layers.

Most enterprise AI deployments require both ontologies and semantic layers. The industry is converging on this concept precisely because no single layer covers the full stack. An ontology without metric governance leaves AI agents unable to calculate reliably. A semantic layer without domain knowledge leaves them unable to reason. A knowledge graph without governance policies leaves them unable to enforce trust. The context layer is the architecture that connects all of these.

This is what context engineering addresses. The discipline emerged in 2025 and 2026 around a simple premise: AI systems need structured context to produce reliable outputs. Ontologies and semantic layers are the building blocks.

How the layers connect


Start with the ontology. A financial ontology defines “Revenue Recognition” as a concept with specific properties and rules. The semantic layer picks that up and turns it into calculation logic: which tables, which joins, which aggregations produce the governed number.

It works the other way, too. A healthcare semantic layer defines “Patient Episode” as a metric. But what counts as an episode? How do episodes relate to diagnoses? What constraints apply across care settings? The ontology answers those questions, giving the metric its clinical meaning.

The knowledge graph ties everything together. It links ontological concepts to semantic definitions, lineage, and governance policies. When the ontology defines “Customer Lifetime Value” and the semantic layer implements the calculation, the knowledge graph tracks every upstream dependency and downstream consumer. AI agents consume all of this through protocols like MCP, pulling domain understanding, validated calculations, and relationship context from the full stack in a single query.

Atlan as the context layer


Atlan’s active metadata platform operates across these layers:

  • Metadata graph: A native knowledge graph connecting metadata from 100+ tools across the data stack
  • Business glossary: A lightweight blend of ontology and taxonomy that defines and relates business terms organization-wide
  • Semantic layer integrations: Native ingestion of dbt Semantic Layer and BI semantic models today, with Snowflake Semantic Views and other OSI-compliant models coming online, pulling governed metric definitions into the broader metadata graph
  • MCP server: Exposes unified context to any MCP-compatible AI agent at inference time

This approach doesn’t force enterprises to choose between ontology and semantic layer investments. Teams implement whichever semantic layer fits their stack, build domain knowledge through Atlan’s business glossary and data products, and the platform unifies both into a governed context layer.

Frequently asked questions on ontologies vs. semantic layers


Is an ontology the same as a knowledge graph?


They’re related but distinct. An ontology defines the schema: the rules, concepts, and relationships in a domain. A knowledge graph stores the instances. You can build a knowledge graph from an ontology, but you can have either without the other.

Does dbt replace an ontology?


Not even close. dbt’s Semantic Layer governs metric access and consistency. An ontology governs domain meaning and relationships. They’re complementary: ontologies define what “customer” means; semantic layers define how to measure “customer revenue.”

Can I start with a semantic layer and add an ontology later?


That’s actually the most common path. Most enterprises start with a semantic layer for immediate analytics consistency, then layer on ontological capabilities as AI use cases demand richer context. Your business glossary is already a lightweight ontology. Formalizing it is an incremental step, not a rip-and-replace.

What’s the role of RDF and OWL in modern data stacks?


They’re the W3C standards for encoding ontologies in machine-readable format. Think of them as the foundation for semantic interoperability. Modern tools like Microsoft Fabric IQ and Palantir’s AI Platform use OWL and RDF concepts under the hood, even when the end user never touches a triple directly.

How does MCP relate to semantic layers and ontologies?


MCP (Model Context Protocol) is a runtime protocol that lets AI agents query structured context. It’s the pipe, not the water. Semantic layers and ontologies provide the structured context that MCP surfaces. Together, they form the AI-readable context layer enterprises are building in 2025 and 2026.

What is the Open Semantic Interchange (OSI)?


A vendor-neutral specification for semantic metadata exchange. Led by Snowflake, dbt Labs, Salesforce, and 30+ industry partners, OSI is published under an Apache 2.0 license. The v1.0 specification was finalized in January 2026.

Are semantic layers enough for AI agents, or do they need ontologies too?


Semantic layers give AI agents governed metrics, which solves half of the problem. LLMs need to understand what things are, how they relate, and what actions are possible. That’s knowledge representation, not metric definition. For simple “chat with your data” use cases, a semantic layer might be enough. For agents that need to reason across domains or infer relationships, you’ll need an ontological structure underneath.

Is Palantir’s ontology approach realistic for most data teams?


Palantir proved that ontology-first architecture works at enterprise scale. But their model depends on forward-deployed engineers who embed with each client to manually build and maintain ontologies. That’s expensive, and it hasn’t scaled beyond large enterprises. LLM-assisted ontology generation is starting to close this gap. For most organizations, starting with a business glossary and evolving toward formal ontology is more realistic than a Palantir-style deployment.

Is a semantic layer just a repackaged data mart or OLAP cube?


Semantic layers in 2026 solve a different problem than the OLAP cubes of 2006. The architecture may look familiar, but the consumer has changed from human analysts to AI agents.

Can my data team build an ontology, or do I need knowledge engineers?


Building a full OWL-based ontology requires specialized skills that most data teams don’t have. Ontology engineering and metric definition are fundamentally different disciplines. But you don’t have to start at the deep end. Business glossaries, governed data products, and classification taxonomies are all lightweight forms of ontology that data teams already know how to build.


Does your AI agent know what “revenue” means, or just where to find it?


Most enterprise AI systems today can find revenue, but they cannot explain what it means. That’s the gap ontologies and semantic layers fill together. And right now, most teams have built only one half of the answer.

A semantic layer gives your agent a reliable path to the number. It governs the calculation, enforces the joins, and makes sure every tool returns the same $10.2 million. That’s valuable. For many teams, it’s the first thing to build and the fastest path to trustworthy analytics.

An ontology does something different entirely. It gives the agent understanding: what a customer is, how that customer connects to products and markets and regulatory frameworks, and why one revenue figure matters more than another in a specific business context.

Without that layer, AI agents operate as precise calculators without conceptual understanding of what they’re calculating.

Successful enterprises didn’t pick one over the other. They started with the semantic layer for immediate metric consistency. Then they built domain knowledge incrementally, often through business glossaries and data products that carry ontological context without requiring a PhD in knowledge engineering. The result is a context layer in which AI agents receive both the governed number and its meaning.

And context is what Atlan’s context engineering approach is built to address by unifying metric governance, domain knowledge, lineage, and trust signals into a single layer that both humans and AI agents can consume.

Book a Demo →


Atlan is the next-generation platform for data and AI governance. It is a control plane that stitches together a business's disparate data infrastructure, cataloging and enriching data with business context and security.