What are the core components of ontology?
An ontology is a structured system of interrelated components, each serving a specific function in how meaning is encoded and used by AI systems.
Concepts and classes
Concepts, also called classes, are the fundamental building blocks of an ontology. They define the categories of things that exist within a domain. In an enterprise data ontology, classes might include Customer, Transaction, Product, Risk Event, or Regulatory Filing.
Classes form the vocabulary of the ontology. Every other component relates back to them. Without well-defined classes, downstream components such as relationships and rules lack a stable foundation.
Properties and attributes
Properties describe the characteristics of each class. They answer the question: what do we know about this type of thing?
Properties fall into two broad categories:
- Data properties: Scalar values attached to a concept, such as customer_id, order_amount, or account_status
- Object properties: Links from one concept to another, such as a customer being associated with a specific account tier or geographic region
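As a rough illustration, the two property types can be sketched in a few lines of Python. The `Customer` and `AccountTier` shapes below are hypothetical examples, not a prescribed model:

```python
from dataclasses import dataclass

# Illustrative sketch: a "Customer" concept with data properties (scalar
# values) and an object property (a typed link to another concept).

@dataclass
class AccountTier:
    name: str  # e.g. "gold"

@dataclass
class Customer:
    # Data properties: scalar values attached to the concept
    customer_id: str
    account_status: str
    # Object property: a link from one concept to another
    tier: AccountTier

c = Customer(customer_id="C-001", account_status="active", tier=AccountTier("gold"))
print(c.tier.name)  # the object property resolves to another concept instance
```

The same distinction carries over to RDF and OWL, where data properties point at literals and object properties point at other resources.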
Relationships
Relationships define how classes connect to one another. Unlike taxonomies, which are limited to hierarchical “is-a” connections, ontologies support multiple named relationship types with precise logical semantics:
- is-a: Expresses class membership or subclass hierarchy, such as “a SavingsAccount is-a BankAccount”.
- has-part / part-of: Compositional relationships between wholes and their components.
- governed-by: Links an entity to the policy or rule that governs it.
- depends-on: Captures upstream dependencies between data assets, processes, or systems.
- places / receives: Domain-specific action relationships, such as “Customer places Order”.
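These typed relationships can be sketched as labeled (subject, relation, object) triples. The relation names mirror the list above; the fact list and helper function are illustrative assumptions:

```python
# Illustrative sketch: typed relationships stored as labeled triples,
# in contrast to a taxonomy's single "is-a" hierarchy.

triples = [
    ("SavingsAccount", "is-a", "BankAccount"),
    ("Engine", "part-of", "Car"),
    ("Transaction", "governed-by", "AML Policy"),
    ("Dashboard", "depends-on", "OrdersTable"),
    ("Customer", "places", "Order"),
]

def related(subject, relation):
    """Return everything `subject` points to via the named relation."""
    return [o for s, r, o in triples if s == subject and r == relation]

print(related("Customer", "places"))  # ['Order']
```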
Axioms and rules
Axioms are logical statements that constrain what is valid within the ontology. They are what elevate an ontology above a simple data dictionary or taxonomy and make it useful for automated reasoning:
- Consistency constraints: For example, a patient cannot be prescribed a medication to which they are allergic.
- Derivation rules: For example, if all managers are employees and John is a manager, then John is an employee.
- Business rules: For example, revenue must equal the sum of all recognized transaction amounts within a reporting period.
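A minimal sketch of how a reasoner might apply the derivation rule and consistency constraint above. The tiny fact base and function names are hypothetical; a real reasoner (e.g. an OWL reasoning engine) works over a full logical calculus, not ad-hoc dictionaries:

```python
# Illustrative sketch of two axiom types over a tiny fact base.

subclass_of = {"Manager": "Employee"}   # derivation: all managers are employees
instance_of = {"John": "Manager"}       # asserted fact: John is a manager

def classes_of(individual):
    """Derive every class an individual belongs to by walking the hierarchy."""
    classes = []
    cls = instance_of.get(individual)
    while cls is not None:
        classes.append(cls)
        cls = subclass_of.get(cls)
    return classes

print(classes_of("John"))  # ['Manager', 'Employee'] -- derived, not stored

# Consistency constraint: a patient cannot be prescribed a medication
# to which they are allergic.
allergies = {"patient-7": {"penicillin"}}

def check_prescription(patient, medication):
    if medication in allergies.get(patient, set()):
        raise ValueError(f"{patient} is allergic to {medication}")
```

The point of the sketch: “John is an employee” is never stored anywhere; it is derived on demand, which is exactly what distinguishes an ontology from a data dictionary.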
Standard technologies
Two W3C standards underpin most formal ontology implementations in AI and data engineering:
- RDF (Resource Description Framework): RDF is the standard, graph-based data model that represents facts as subject-predicate-object triples, providing a flexible and interoperable foundation for linking data.
- OWL (Web Ontology Language): Built on RDF, OWL supports richer class definitions, property constraints, and logical axioms, enabling automated inference and consistency checking by reasoning engines.
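To make the triple model concrete, here is a minimal sketch using plain Python tuples rather than a real RDF library such as rdflib. The `example.org` IRIs and predicate names are illustrative, not a published vocabulary:

```python
# Illustrative sketch of the RDF data model: every fact is a
# subject-predicate-object triple, and a graph is just a set of them.

EX = "http://example.org/"

graph = {
    (EX + "order42", EX + "placedBy", EX + "customer7"),
    (EX + "customer7", EX + "accountStatus", "active"),  # literal object
}

def objects(subject, predicate):
    """Look up all objects for a given subject and predicate."""
    return {o for s, p, o in graph if s == subject and p == predicate}

print(objects(EX + "order42", EX + "placedBy"))
```

In a production system the same triples would live in a triple store and be queried with SPARQL; the underlying model is identical.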
Why does ontology matter in AI? Why now?
Ontology has grown in significance because of the fragmented semantics problem that enterprises face today.
Large language models and AI agents do not inherently know what “customer,” “churn,” or “revenue” means in your organization. They infer meaning from statistical patterns in training data, which may contradict the precise, agreed definitions that your finance, sales, and product teams have developed over years.
This fragmented semantics problem surfaces as:
- Conflicting AI outputs: Two agents answering the same business question give different answers because they resolve the same term differently.
- Hallucinations grounded in wrong definitions: The model produces confident, fluent responses that are technically plausible but factually wrong for your specific domain.
- Context rework for every AI project: Each new agent or analytics use case requires weeks of ad-hoc context building because no shared semantic model exists.
Gartner’s 2026 D&A Summit positioned semantic layers and knowledge graphs as foundational infrastructure for agentic AI. Ontologies are a fundamental component of the knowledge graph, defining the entities and relationship types it stores.
Ontologies address the fragmented semantics problem in four concrete ways:
- Hallucination reduction: Explicit rules and canonical definitions constrain what AI systems can assert, replacing statistical guesswork with governed knowledge.
- Interoperability: Multiple tools, agents, and systems share the same machine-readable meaning for shared terms, eliminating brittle point-to-point integrations.
- Automated inference: Reasoning engines can derive new, valid facts from existing relationships without requiring those facts to be explicitly stored.
- Cross-domain integration: Ontologies provide the semantic glue for connecting heterogeneous systems in regulated environments such as finance, healthcare, and government, where definitional precision is non-negotiable.
Beyond these, the rise of active metadata platforms such as Atlan, which embed ontological principles into the metadata and governance workflows teams already use, has made ontology-first approaches practical in 2026.
How is ontology different from a knowledge graph, semantic layer, and context layer? Key differences explained
Ontology, taxonomy, knowledge graph, semantic layer, and context graph are related concepts that are frequently conflated. Each plays a distinct role in enterprise AI and data infrastructure.
Ontology vs taxonomy vs knowledge graph
Taxonomy is the simplest of the three. It organizes concepts into a hierarchical parent-child tree where relationships are almost exclusively “is-a.”
Ontology extends taxonomy into a rich network of concepts, properties, typed relationships, and logical axioms. Knowledge graph takes the ontology one step further by instantiating it with real-world entities and actual relationships.
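The distinction can be sketched in a few lines of Python: the ontology holds class and relation definitions, while the knowledge graph holds instances that conform to them. All names below are hypothetical:

```python
# Illustrative sketch: the ontology defines which classes and relationship
# types are allowed; the knowledge graph instantiates them with real entities.

ontology = {
    "classes": {"Customer", "Order"},
    "relations": {("Customer", "places", "Order")},  # allowed edge types
}

knowledge_graph = [
    ("Customer:123", "places", "Order:456"),  # an instantiated, real-world fact
]

def conforms(fact, ontology):
    """Check an instance-level fact against the ontology's relation types."""
    subject, relation, obj = fact
    subject_class = subject.split(":")[0]
    object_class = obj.split(":")[0]
    return (subject_class, relation, object_class) in ontology["relations"]

print(all(conforms(f, ontology) for f in knowledge_graph))  # True
```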
Atlan’s metadata knowledge graph is a specific application of this idea: an ontology of data assets, metadata, policies, and processes, realized as a queryable graph across the entire data estate.
Ontology vs. semantic layer
An ontology models domain knowledge at a conceptual level, whereas a semantic layer translates technical database schemas into business-friendly metrics and dimensions for analytics and BI.
An ontology lets you understand what “Revenue” means in principle and how it relates to “Transaction,” “Contract,” and “Reporting Period.”
The semantic layer implements queryable logic, answering questions like: how do I calculate Monthly Recurring Revenue from these five tables? What filters apply to this regional dashboard?
Ontology vs. context graph vs context layer
An ontology is a static, versioned semantic backbone. A context graph is dynamic and continuously updated, capturing operational decisions, temporal context, and provenance.
A context layer unifies both, along with a third dimension: runtime context. For example, Atlan’s context layer combines the enterprise data graph, AI-generated context, active ontology and institutional memory, and runtime context into a single infrastructure that AI agents can query via MCP.
Key differences summarized
| Aspect | Ontology | Taxonomy | Knowledge graph | Semantic layer | Context graph | Context layer |
|---|---|---|---|---|---|---|
| Nature | Formal semantic model | Hierarchical classification | Graph of real entities | Analytics abstraction | Dynamic operational graph | Unified AI infrastructure |
| Structure | Concepts, relationships, axioms | Parent-child tree | Nodes and edges instantiating an ontology | Dimensions, metrics, joins | Entities, decisions, policies, events | Data graph + ontology + context + runtime |
| Relationships | Multiple typed, with logic | Mostly “is-a” | Typed, based on ontology | Logical joins and filters | Temporal, provenance-based | All of the above |
| Update model | Deliberate, versioned governance | Deliberate, versioned | Continuous with data | Managed releases | Continuous, event-driven | Continuous, AI + human |
| Primary users | AI systems, knowledge engineers | Search, navigation, BI | AI, governance, search | BI tools, analysts | AI agents, governance | AI agents, humans, governance |
| Reasoning support | Full inference and consistency | Minimal | Semantic and graph traversal | Query logic only | Provenance and lineage | Full stack |
| Use cases | AI reasoning, cross-system semantics, regulatory compliance | Navigation, categorization, filtered search | Semantic search, RAG grounding, impact analysis | BI reporting, KPI definitions, analytics | Decision tracing, policy enforcement, provenance | Agentic AI, governed retrieval, enterprise memory |
What does an ontology-first architecture look like?
An ontology-first architecture is an approach where business concepts and relationships form the primary backbone of your enterprise AI systems. AI agents read from and write to that ontology, rather than binding directly to schemas or prompts.
This is a deliberate inversion of the schema-first and prompt-first approaches that dominated early enterprise AI. In a schema-first system, agents are wired directly to database tables. In a prompt-first system, business logic is buried inside prompt templates. Both become brittle as systems evolve and use cases multiply.
An ontology-first architecture avoids this by making the shared semantic model the stable interface layer that neither agents nor data systems need to renegotiate every time something changes.
Core components of an ontology-first architecture
1. Concept and relationship model (the ontology)
The foundation is a version-controlled ontology specification capturing the domain’s entities, attributes, and relationships. This can be expressed in JSON-LD, RDF, OWL, or a graph-native format. It is reviewable by both business and technical stakeholders and serves as the authoritative reference for what terms mean across the organization.
2. Ontology-to-implementation mappings
The ontology is connected to physical systems through explicit mappings. For example, the concept “Customer” maps to a Snowflake table, a Salesforce object, and a Kafka event stream. These mappings are maintained as first-class artifacts, not buried in application code.
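Such mappings might be maintained as a simple, versionable structure. The system names and object paths below are purely illustrative:

```python
# Illustrative sketch: ontology-to-implementation mappings kept as a
# first-class, reviewable artifact rather than buried in application code.

mappings = {
    "Customer": [
        {"system": "snowflake", "object": "ANALYTICS.CRM.DIM_CUSTOMER"},
        {"system": "salesforce", "object": "Account"},
        {"system": "kafka", "object": "customer-events"},
    ],
}

def implementations(concept):
    """Resolve a business concept to every physical system that implements it."""
    return mappings.get(concept, [])

print([m["system"] for m in implementations("Customer")])
```

Because the mapping is data rather than code, a renamed table means editing one entry here instead of hunting through every agent and pipeline that touches “Customer.”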
3. Ontology-aware tools and agents
AI agents interact with the data estate through tools defined in terms of ontology entities and operations, such as get_customer, update_contract, or create_incident. Because these tools speak the language of the ontology rather than the language of raw schemas, they remain stable even as underlying implementations change, and their outputs are traceable back to governed definitions.
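A minimal sketch of one such ontology-aware tool, assuming a hypothetical `get_customer` contract and an in-memory stand-in for the physical store:

```python
# Illustrative sketch: an agent tool defined against the ontology concept
# "Customer" rather than a physical table. The backing store is a stand-in
# for a warehouse table; in practice the resolver would use the
# ontology-to-implementation mappings.

PHYSICAL_STORE = {
    "C-001": {"customer_id": "C-001", "account_status": "active"},
}

def get_customer(customer_id: str) -> dict:
    """Ontology-level tool: speaks 'Customer', not 'DIM_CUSTOMER'.

    If the underlying table moves or is renamed, only this resolver changes;
    the tool's contract with the agent stays stable.
    """
    record = PHYSICAL_STORE.get(customer_id)
    if record is None:
        raise KeyError(f"No Customer with id {customer_id}")
    return record

print(get_customer("C-001")["account_status"])  # "active"
```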
4. Incremental, domain-driven rollout
A practical ontology-first implementation does not require a multi-year OWL modeling project. The recommended approach is to:
- Start with a minimal ontology covering one domain, such as customer health or financial reporting.
- Map that ontology to real data assets in your existing stack.
- Build ontology-aware tools for agents operating in that domain.
- Deploy, evaluate, and iterate before expanding to additional domains.
How can Atlan help with ontology for your data and AI estate?
Atlan turns your fragmented definitions, metadata, and tribal knowledge into an active ontology and context layer that AI agents and humans can trust. Atlan embeds ontological principles into the metadata and governance workflows data teams already use, making the path from fragmented semantics to a governed, queryable knowledge structure practical rather than theoretical.
Key capabilities include:
1. Enterprise Data Graph to connect all your metadata.
Atlan’s enterprise data graph unifies metadata from business systems, data warehouses, BI tools, pipelines, and governance workflows into a single connected graph. Lineage, query history, semantic relationships, quality signals, and ownership are all interconnected.
This graph is the empirical foundation on which Atlan’s ontology and context capabilities are built and reflects what your organization actually has, rather than an idealized model disconnected from production reality.
2. AI-generated ontology and context to bootstrap a semantic model.
Atlan’s enrichment studio agents automatically generate descriptions, link terms to business concepts, identify metrics and KPIs, extract common query patterns, and bootstrap an ontology from the evidence already present in your data estate.
As a result, you get a working ontological structure without the overhead of manual modeling, by surfacing and formalizing the implicit knowledge already embedded in metadata, lineage, and usage patterns.
3. Active ontology and knowledge graph to keep all assets live and queryable.
Atlan’s business glossary functions as an enterprise ontology in action. Domain concepts such as Product, Customer, and Risk Event are modeled with hierarchies, related terms, and typed relationships.
Each glossary term is linked to the actual tables, columns, dashboards, and reports that implement it, with column-level lineage showing how concepts flow through pipelines into downstream reports. Atlan automatically discovers related terms and linked assets, keeping the ontology connected to live metadata signals.
4. Context graph and runtime context to capture decision-making accountability.
Beyond the static semantic model, Atlan’s context graph captures the operational history of decisions: who certified a dataset, which policy governed an approval, what changed between data runs, and what the downstream impact of a schema change would be.
At query time, AI agents access this context via Atlan’s MCP server, receiving not just definitions and lineage but the full operational context required to act reliably without hallucinating.
Real stories from real customers building an enterprise context layer for agentic AI
Mastercard: Embedded context by design with Atlan
"AI initiatives require more context than ever. Atlan's metadata lakehouse is configurable, intuitive, and able to scale to hundreds of millions of assets. As we're doing this, we're making life easier for data scientists and speeding up innovation."
Andrew Reiskind, Chief Data Officer
Mastercard
CME Group: Established context at speed with Atlan
"With Atlan, we cataloged over 18 million data assets and 1,300+ glossary terms in our first year, so teams can trust and reuse context across the exchange."
Kiran Panja, Managing Director
CME Group
Moving forward with an ontology-first architecture for your data and AI estate
Ontology makes AI trustworthy. The convergence of agentic AI, large language models, and complex multi-system data estates has made the question of shared semantic meaning unavoidable. Organizations that lack a governed ontological foundation are discovering it the hard way: through hallucinating agents, conflicting metric definitions, and AI projects that stall because no one can agree on what the data means.
The path forward begins with unifying the knowledge you already have, formalizing it incrementally, and connecting it to the systems and agents that need it. The organizations building reliable AI in 2026 share one characteristic: they invested in the semantic and context foundation before they deployed the agents.
FAQs about ontology
1. What is ontology in simple terms?
An ontology is a structured vocabulary that defines what things are and how they relate to each other in a specific domain. In everyday terms, it is a shared language for a system or organization: one that is precise enough for computers to reason from, not just for humans to read. When an AI system knows that a “Customer” places an “Order” and that an “Order” must have a value greater than zero, it is working from an ontology.
2. What is the difference between an ontology and a database schema?
A database schema describes the physical structure of stored data: tables, columns, data types, and constraints. An ontology describes the conceptual meaning of that data: what entities exist, how they relate, and what rules govern valid states. A schema tells a system where data lives. An ontology tells a system what that data means. The two work together: ontologies are commonly mapped to schemas, but they remain stable when schemas change, which is precisely why ontology-first architectures are more resilient than schema-first ones.
3. What is the difference between an ontology and a knowledge graph?
An ontology defines the rules and structure: the classes, relationship types, and axioms that govern a domain. A knowledge graph populates that structure with real-world instances and actual relationships. The ontology says “a Customer can place an Order.” The knowledge graph records that Customer 123 placed Order 456 on March 1, 2026. You can have an ontology without a knowledge graph (a purely conceptual model), but a well-formed knowledge graph always has an ontology governing its structure, whether explicit or implicit.
4. What is the difference between an ontology and a taxonomy?
A taxonomy organizes concepts into a hierarchical tree of parent-child relationships, primarily “is-a” connections. An ontology extends this to include multiple types of relationships, logical constraints, and inference rules. A taxonomy tells you that a Gaming Laptop is a type of Laptop, which is a type of Electronics. An ontology can additionally express that a Gaming Laptop requires a Discrete GPU, is governed by a Consumer Electronics Warranty Policy, and cannot be classified as enterprise hardware without a specific certification. Ontologies are significantly more expressive and support automated reasoning in ways that taxonomies cannot.
5. Are ontologies only relevant for large enterprises?
No. While formal OWL ontologies have historically been associated with large enterprises and academic research, the principles of ontology-first design apply at any scale. A startup that maintains a well-structured business glossary linked to its data assets is effectively operating with a lightweight ontology. The investment required scales with the complexity of the domain and the number of systems and agents that need to share meaning, not with organizational size alone.
6. How do ontologies help reduce AI hallucinations?
Hallucinations in AI systems occur when a model generates plausible-sounding but factually incorrect outputs, typically because it lacks grounded, authoritative knowledge about the domain. Ontologies reduce hallucinations by constraining what an AI system can assert. When an agent retrieves context from an ontology-backed knowledge graph, it receives explicit, governed definitions and relationships rather than inferring meaning from statistical patterns alone. Logical axioms further constrain valid outputs, preventing the model from generating answers that violate domain rules. Research on context-graph-driven retrieval shows hallucination reductions exceeding 40% compared to vanilla retrieval approaches.
7. What is an example of ontology in AI in practice?
A financial services firm builds a domain ontology defining entities such as Customer, Account, Transaction, and Regulatory Filing, along with the relationships and rules governing them. This ontology is mapped to tables in their data warehouse and linked to their BI dashboards and compliance systems. When an AI agent is asked “which customers are at risk of regulatory non-compliance?”, it queries the ontology-backed knowledge graph to retrieve the governed definition of compliance risk, the accounts and transactions relevant to each customer, and the policies that apply. The result is a traceable, auditable answer grounded in the firm’s own authoritative definitions, rather than a statistically plausible guess based on training data patterns.