Everyone in data and AI is talking about context engineering. Most of the conversation focuses on retrieval pipelines, semantic layers, and prompt design. But there is an older discipline that already solved a large part of this problem, and the industry quietly abandoned it a decade ago. What we lost when we stopped doing conceptual modeling is exactly what AI agents need now.
What problem are AI teams really solving when they talk about “context”?
Connecting data with its business context means connecting data with real value: value creation happens in the business, and whatever we do with data needs to be connected to the business as closely as possible. This has always been the case, long before GenAI — good context helps human users as well. It just hasn’t been obligatory, because we could count on the tacit knowledge of data consumers to cross that final mile between a field called “cust_lt_val” and actual decision-making.
LLMs make that context obligatory. While the models have a general grasp of how things usually work based on their training data, they don’t understand how things work in this particular organization with its idiosyncrasies. To make data useful for AI consumers, we need to add more information around the data. This is the context problem: what should that information be, and how is it served to the AI?
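To make the context problem concrete, here is a minimal sketch of what “information around the data” could look like and how it might be served to an AI consumer. The field name “cust_lt_val” and its meaning come from this article; the record structure, the definition text, and the `describe` helper are illustrative assumptions, not a prescribed format.

```python
# Hypothetical context record for one technical field. Everything beyond the
# field name itself is the kind of extra information an AI consumer needs.
context = {
    "technical_name": "cust_lt_val",
    "business_term": "Customer Lifetime Value",
    "definition": "Projected net revenue from a customer over the whole relationship.",
    "calculation": "Historical margin plus modeled future margin.",  # assumed
    "owner": "Customer Analytics",  # who can answer questions about this term
}

def describe(field: str, record: dict) -> str:
    """Turn a context record into a prompt-ready snippet for an AI agent."""
    if record["technical_name"] != field:
        raise KeyError(field)
    return f"{field} means '{record['business_term']}': {record['definition']}"

print(describe("cust_lt_val", context))
```

Without a record like this, the agent sees only the string `cust_lt_val` and has to guess.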
How did conceptual modeling quietly disappear?
The problem of business context is not new. We have always known it is important to understand how the data in our systems reflects our business reality. That understanding helps both to design new solutions that match real needs and to document them in a business-friendly way.
One way of forming that understanding is conceptual modeling. Data modeling used to be a standard part of the design stage of any project, and it proceeded along a well-known path: conceptual, logical, physical. From the business understanding of what the data is about, through use-case-specific structures, to technical implementation design.
But when the underlying technical capabilities went through rapid evolution in the Big Data era, technical implementation design for data stores no longer seemed necessary. After all, you could just load massive amounts of data into Hadoop or an equivalent platform, so why worry about storage design?
Unfortunately, this line of thought also led to the widespread abandonment of conceptual and logical modeling. The lowest-level model was no longer thought necessary, so the higher levels preceding it were forgotten as well. With the disappearance of conceptual modeling, we lost a valuable connection to business context: what is this data actually about?
What does a conceptual model know that your warehouse never will?
A bunch of tables in a data warehouse (or a lakehouse, or a lake, or anything really) can tell us about the technical decisions made by the engineer during implementation: this field is a string, that one is a number, and here we have a uniqueness constraint. All this technical metadata is valuable, and over time we developed great data cataloging capabilities for extracting it, but it tells us nothing about semantics, i.e. the meaning of the data.
A conceptual model says “we have a thing called Customer, and the Customer has two subtypes: B2B and B2C, and only the B2B Customer can have Sales Contracts with us, while both of them can make Orders”. The tables in our database tell us how the tables are connected to each other — but there’s no guarantee that the tables will neatly map to the business concepts of Customer, Sales Contract, and Order!
The conceptual model captures and encodes high-level business logic. An accompanying glossary gives definitions for each of the business terms. None of that information can be derived from the bunch of tables sitting in a warehouse — it’s about understanding and modeling the business itself rather than a set of technical objects.
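The business rules above can be made tangible by encoding them as types. This is a sketch, not a claim about how conceptual models should be implemented: the class and attribute names are hypothetical, but the rules follow the text, with Customer having two subtypes and only the B2B subtype holding Sales Contracts.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    order_id: str

@dataclass
class SalesContract:
    contract_id: str

@dataclass
class Customer:
    name: str
    # Both subtypes inherit this: any Customer can make Orders.
    orders: list = field(default_factory=list)

@dataclass
class B2CCustomer(Customer):
    pass

@dataclass
class B2BCustomer(Customer):
    # Sales Contracts exist only on the B2B subtype, encoding the rule
    # "only the B2B Customer can have Sales Contracts with us".
    contracts: list = field(default_factory=list)

acme = B2BCustomer(name="Acme Oy")
acme.contracts.append(SalesContract("SC-1"))

jane = B2CCustomer(name="Jane")
jane.orders.append(Order("O-1"))
# jane has no `contracts` attribute at all: the business rule is carried
# by the model itself, not by a comment in some ETL script.
```

Notice that none of these rules could be recovered from the physical tables alone; they live one level up, in the conceptual model.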
What is the difference between the data plane and the knowledge plane?
The technical objects we create for a variety of solutions can (and will) live on a variety of technology platforms. If the table “dim_cust” exists in one schema on one platform, how do we know whether it contains information about the same things as “clients_aggregate” in another schema on another platform?
Semantics — the meaning of various business terms, and the understanding of their relationships — is not specific to a single technical data object. It is by definition information about the business concepts. We can’t manage semantics at the level of individual data objects; we need a wider, cross-solution and cross-platform repository of knowledge that is technology-agnostic.
The data plane, where all the technical data objects live, and the knowledge plane, in which our semantic understanding lives, must be separate. Trying to manage semantics as part of the data object’s technical metadata is a fool’s errand and only leads to semantic silos.
However, the two planes must be linked to each other. To be able to tell our AI (or human!) data consumers what “cust_lt_val” means, there needs to be a connection between the technical data object living on the data plane and the semantic object (business term) living on the knowledge plane.
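The separation and the link between the two planes can be sketched in a few lines. The object names (“dim_cust”, “clients_aggregate”, “cust_lt_val”) come from this article; the dictionary structures and the `meaning` function are illustrative assumptions about how such a link could be represented.

```python
# Knowledge plane: business terms and their definitions, technology-agnostic.
knowledge_plane = {
    "Customer": "A party we sell to; subtypes B2B and B2C.",
    "Customer Lifetime Value": "Projected net revenue over the relationship.",
}

# Data plane: technical objects identified by (platform, object name).
# Link: each technical object points at one business term.
links = {
    ("warehouse_a", "dim_cust"): "Customer",
    ("platform_b", "clients_aggregate"): "Customer",
    ("warehouse_a", "cust_lt_val"): "Customer Lifetime Value",
}

def meaning(platform: str, obj: str) -> str:
    """Resolve a technical object to its business meaning via the link."""
    term = links[(platform, obj)]
    return f"{obj} -> {term}: {knowledge_plane[term]}"

# Two objects on different platforms resolve to the same concept, which is
# exactly what per-object metadata alone could never tell us.
print(meaning("warehouse_a", "dim_cust"))
print(meaning("platform_b", "clients_aggregate"))
```

The definitions live once, on the knowledge plane; the data plane only carries pointers to them.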
What predictable failures show up when the knowledge plane is missing?
If that knowledge plane doesn’t exist, two main failure modes commonly appear.
In the first case, semantics is entirely missing. There simply isn’t any extra information “around” the technical data object that could tell the AI that “this here means Customer Lifetime Value, and this is how it has been calculated”. This means that the AI agent looking for answers from the data will resort to guessing, based on its generic understanding of how things usually work. Sometimes, that guess might be good enough. Quite often — and especially when the topic is something unique and special to the particular organization (which are also usually the most valuable topics!) — it will fail to understand the nuances involved. We say the AI “hallucinates”, but really it is just guessing and reverting back to the average.
In the second case, usually after the organization has hit its head against the wall of missing semantics often enough, a series of quick fixes is applied. Semantics is understood to be crucial, so the organization rushes to apply some semantics anywhere. At least the AI will work a bit better, surely? But this rush leads to solution-level semantic silos. An individual dataset might now have some definitions attached to its tables and other objects, and that can indeed help when the AI is looking into that particular solution. But when the organization starts applying agentic workflows with a wider, cross-functional scope, the AI soon runs into inconsistencies and unexplainable differences between the semantic information it gets from different solutions. No two solutions have the same definitions for the same thing, and comparing and combining data becomes impossible.

Semantic silos vs. shared semantics — what the knowledge plane makes possible. | Source: Juha Korpela
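The silo failure mode is easy to demonstrate mechanically. In this sketch, two solutions each carry their own local glossary; the solution names and definitions are hypothetical, and the conflict check is an illustration of what a cross-solution agent runs into, not a real tool.

```python
# Two solution-level glossaries, each defining "Customer" for itself.
solution_a = {"Customer": "Any party that has placed at least one order."}
solution_b = {"Customer": "A party with an active sales contract."}

def find_conflicts(*silos: dict) -> dict:
    """Collect terms whose definitions disagree across solution silos."""
    seen = {}
    conflicts = {}
    for silo in silos:
        for term, definition in silo.items():
            if term in seen and seen[term] != definition:
                conflicts.setdefault(term, {seen[term]}).add(definition)
            else:
                seen[term] = definition
    return conflicts

# "Customer" surfaces with two incompatible definitions, so counts and
# aggregates from the two solutions cannot be safely compared or combined.
print(find_conflicts(solution_a, solution_b))
```

A shared knowledge plane removes the conflict at the source: both solutions would link to the same single definition instead of carrying their own.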
How do you turn conceptual modeling into a living context layer?
Conceptual modeling has always been known as a great method of capturing core semantic information for data design and documentation: what are the business things this data is about, and how do they connect to each other in real life?
We can now re-apply those well-known and well-tested methods to build context for AI (as well as humans!). Conceptual modeling can be used for much more than data storage design: it can surface and document semantics that would otherwise remain tacit knowledge, and thus unattainable for AI.
It’s just a matter of figuring out how to manage that information beyond a single diagram. In fact, it’s not necessarily the model diagram itself that is the most important outcome of conceptual modeling; the information captured in a model can and should be made available in all kinds of different formats. Conceptual models can act as the starting point of ontologies and knowledge graphs.

Individual model scopes can connect to each other. An enterprise data model doesn’t need to be a massive multi-year documentation effort, but a more organic result of modelers in different domains and different projects producing semantic information that all becomes part of the same interconnected knowledge base.

Around such a strong base of core entities and relationships, we can add more and more information gathered from documentation and elsewhere with the help of AI. Building on top of and around the core of well-modeled concepts keeps our knowledge in sync and ensures we share an understanding of how our particular business really works.
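One way a conceptual model can feed an ontology or knowledge graph is by flattening it into subject-predicate-object triples. The model content below reuses the Customer example from earlier in the article; the dictionary layout and predicate names are illustrative choices, not a standard serialization.

```python
# A conceptual model captured as plain data: entities, subtype links,
# and named relationships between entities.
model = {
    "entities": ["Customer", "B2B Customer", "B2C Customer",
                 "Order", "Sales Contract"],
    "subtypes": [("B2B Customer", "Customer"), ("B2C Customer", "Customer")],
    "relations": [
        ("Customer", "places", "Order"),
        ("B2B Customer", "holds", "Sales Contract"),
    ],
}

def to_triples(m: dict) -> list:
    """Flatten the model into triples, the raw material of a knowledge graph."""
    triples = [(e, "is_a", "Entity") for e in m["entities"]]
    triples += [(sub, "subtype_of", sup) for sub, sup in m["subtypes"]]
    triples += list(m["relations"])
    return triples

for triple in to_triples(model):
    print(triple)
```

Because the triples are technology-agnostic, the same model content can be loaded into a graph database, exported as RDF, or embedded into an agent’s context, whichever format a given consumer needs.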
Conceptual modeling has always been a method of creating and capturing an understanding of what the business is about, and connecting that to data. Now more than ever that information is crucial for success. Building a true context layer for AI can’t be a micro-level siloed effort; we need to create a cross-solution & cross-platform shared understanding of our core semantics, the foundation any context layer is built on. Conceptual modeling offers a way to build that piece by piece and to connect our data solutions to their real business meaning, without which we can’t cross that last mile to delivering real business value.
Note: Views expressed are those of the contributor. All submissions are vetted for quality and relevance. Context and Chaos is information-first: no promotions, paid or otherwise.