Why agent interoperability matters
Enterprise AI teams are no longer building single agents. They are building systems of agents: specialized models that handle SQL generation, data quality monitoring, workflow orchestration, business reporting, and anomaly detection, each running in parallel or in sequence.
The problem: these agents are built on different frameworks, by different teams, using different vendors. An analytics agent built on LangChain needs to hand data to a governance agent built on LlamaIndex. A workflow orchestrator running on Microsoft Copilot Studio needs to delegate a subtask to an on-premise agent running on a private LLM. Without shared protocols, every one of these connections requires custom integration code.
The result is brittle, expensive, and slow. According to research published in 2025, production deployments of multi-agent LLM systems exhibit failure rates between 41% and 86.7%, with roughly 79% of those failures originating from coordination and specification issues, not from any limitation in the underlying models.
Gartner projects that 40% of enterprise applications will feature task-specific AI agents by 2026, up from less than 5% in 2025. As agent counts multiply, the integration surface area grows faster than any team can hand-code it. That is the problem agent interoperability protocols exist to solve.
The protocol stack: three layers explained
Agent interoperability is not a single problem. It is three distinct problems that require three distinct solutions, each operating at a different layer of the stack.
Here is how each layer works and what it solves.
MCP deep dive: how it works and what it solves
The problem MCP solves
Before MCP, every AI agent needed a custom integration for every external tool or data source it used. An analytics agent connecting to Snowflake, Salesforce, a REST API, and an internal knowledge base required four separate connectors, each written differently, maintained separately, and breaking whenever the underlying tool changed. The integration surface area grew quadratically as agents and tools multiplied.
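The arithmetic behind that growth is worth making concrete. With point-to-point integrations, every agent–tool pair needs its own connector; with a shared protocol, each side implements the standard once. A minimal sketch:

```python
# Point-to-point: every (agent, tool) pair needs a custom connector.
def point_to_point(agents: int, tools: int) -> int:
    return agents * tools

# Shared protocol (MCP-style): each agent ships one client,
# each tool ships one server.
def shared_protocol(agents: int, tools: int) -> int:
    return agents + tools

# 10 agents and 20 tools: 200 custom connectors versus
# 30 protocol implementations.
assert point_to_point(10, 20) == 200
assert shared_protocol(10, 20) == 30
```

The gap widens as either axis grows, which is why hand-coded integration stops scaling well before the agent count does.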
How MCP works
MCP, introduced by Anthropic in November 2024, is an open standard built on JSON-RPC 2.0. It creates a three-component architecture:
- MCP Host: the AI application the user interacts with (Claude, Cursor, a custom enterprise app)
- MCP Client: the translation layer inside the host that speaks MCP to servers
- MCP Server: the external system that exposes capabilities to agents (a database, a REST API, a catalog, a code repository)
MCP defines three core primitives: Tools (actions an agent can invoke), Resources (data an agent can read), and Prompts (templated instructions an agent can execute). Any tool that publishes an MCP Server can immediately be accessed by any MCP-compatible agent, with no custom integration required.
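Because MCP rides on JSON-RPC 2.0, a tool invocation is just a structured request/response pair. The sketch below shows the shape of a `tools/call` exchange; the tool name and its arguments are hypothetical, while the envelope follows JSON-RPC 2.0 and the MCP method naming:

```python
import json

# A minimal MCP "tools/call" exchange. The tool ("query_warehouse")
# and its arguments are hypothetical; the envelope shape follows the
# JSON-RPC 2.0 spec that MCP is built on.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_warehouse",  # hypothetical tool exposed by a server
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 1,  # echoes the request id so the client can correlate
    "result": {
        "content": [{"type": "text", "text": "42314"}],
    },
}

# What actually crosses the transport is serialized JSON.
wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/call"
assert response["id"] == request["id"]
```

The same envelope carries `tools/list`, `resources/read`, and the other primitives, which is what lets one client implementation talk to any server.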
The practical result: average time to wire a new SaaS tool into an AI agent dropped from 18 hours of custom function-calling code to 4.2 hours with MCP (per MCP adoption data, Digital Applied, 2026).
MCP adoption today
Permalink to “MCP adoption today”MCP was donated to the Linux Foundation’s Agentic AI Foundation (AAIF) in December 2025, alongside OpenAI and Google as co-founders. By March 2026, MCP had reached 97 million monthly SDK downloads, comparable scale to the React npm package but achieved in 16 months instead of three years. Every major AI provider now supports MCP natively: Anthropic, OpenAI, Google, Microsoft, AWS, Cursor, and JetBrains.
For a detailed look at how Atlan’s MCP Server delivers governed metadata context to agents, see How Atlan MCP Server Builds Context For Your AI Tools.
A2A deep dive: how it works and what it solves
The problem A2A solves
MCP solved how agents access tools. It left open a different problem: how does one agent hand a task to another agent? In a multi-agent system, an orchestrator agent might need to delegate a data validation task to a specialist agent, get a result, then pass it forward to a reporting agent. Without a standard, this delegation is custom code that is fragile, framework-specific, and hard to scale.
How A2A works
Google released the Agent-to-Agent (A2A) protocol in April 2025, built on HTTP, Server-Sent Events (SSE), and JSON-RPC, all standards already embedded in enterprise IT stacks. A2A enables four capabilities:
- Capability discovery: agents advertise what they can do via “Agent Cards,” JSON files published at a well-known URL. An Agent Card includes the agent’s name, description, service endpoint, supported modalities, and authentication requirements. Think of it as a machine-readable resume an agent publishes so other agents can find it.
- Task management: A2A defines a standard task lifecycle (submitted, working, input-required, completed, failed, canceled), so client agents can track task state across the full execution cycle.
- Agent-to-agent collaboration: a client agent identifies the right remote agent via its Agent Card, delegates a task using A2A, and receives results regardless of what framework each agent is built on.
- UX negotiation: agents communicate what response formats and modalities they support, adapting to different consumption contexts.
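To make capability discovery concrete, here is a pared-down Agent Card expressed as a Python dict. The endpoint, agent name, and skill are hypothetical; the field layout is abbreviated from the A2A specification:

```python
import json

# A pared-down A2A Agent Card: the JSON document an agent publishes at
# a well-known URL so other agents can discover it. Endpoint and skill
# below are hypothetical; field names abbreviated from the A2A spec.
agent_card = {
    "name": "data-validation-agent",
    "description": "Runs quality checks against warehouse tables",
    "url": "https://agents.example.com/a2a",  # service endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},  # supports SSE status updates
    "defaultInputModes": ["application/json"],
    "defaultOutputModes": ["application/json"],
    "skills": [
        {
            "id": "validate-table",
            "name": "Validate table",
            "description": "Checks freshness and null-rate rules",
        }
    ],
}

# A client agent fetches this document, inspects the skills list, and
# decides whether this agent can take the task it wants to delegate.
card_json = json.dumps(agent_card, indent=2)
assert json.loads(card_json)["skills"][0]["id"] == "validate-table"
```

The “machine-readable resume” framing holds up well here: discovery is just fetching and filtering these documents, with no shared framework required on either side.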
A2A was donated to the Linux Foundation (AAIF) in June 2025. More than 50 technology partners adopted it at launch, including Atlassian, Salesforce, SAP, ServiceNow, and Workday.
For a complete breakdown, see What Is Google’s A2A Protocol?
MCP and A2A together
MCP and A2A solve adjacent, non-overlapping problems. MCP is how an agent connects to a tool (Layer 1). A2A is how an agent delegates to another agent (Layer 2). In practice:
- A workflow orchestrator agent uses A2A to delegate a data validation task to a specialist agent
- The specialist agent uses MCP to pull data from Snowflake, check quality rules, and return results
- Results flow back through A2A to the orchestrator
Most production multi-agent systems will use both.
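The division of labor between the two layers can be sketched in a few lines. Everything here is conceptual: the transports are stubbed and the function names are illustrative, not actual SDK calls from either protocol’s libraries:

```python
# Conceptual sketch of MCP and A2A working together. Transports are
# stubbed; function names are illustrative, not real SDK calls.

def mcp_tool_call(tool: str, arguments: dict) -> dict:
    """Stand-in for an MCP tools/call round trip to e.g. Snowflake."""
    return {"rows_checked": 1_000_000, "violations": 0}

def specialist_agent(task: dict) -> dict:
    """A2A remote agent: uses MCP (Layer 1) to reach its tools."""
    result = mcp_tool_call("run_quality_rules", {"table": task["table"]})
    return {"state": "completed", "result": result}

def orchestrator() -> dict:
    """Uses A2A (Layer 2) to delegate to the specialist.

    In a real deployment this is an HTTP call to the remote agent's
    endpoint, discovered via its Agent Card."""
    task = {"skill": "validate-table", "table": "analytics.orders"}
    return specialist_agent(task)

outcome = orchestrator()
assert outcome["state"] == "completed"
```

Note that the orchestrator never touches MCP and the specialist never exposes its tools upstream: each layer stays behind its own boundary, which is exactly the non-overlap the protocols were designed for.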
OSI and semantic interoperability
The semantic problem MCP and A2A leave unsolved
MCP and A2A solve the communication mechanics of agent interaction. They do not solve what agents mean by the terms they exchange.
Consider a multi-agent financial system where one agent handles revenue forecasting and another handles customer health scoring. Both agents are A2A-compliant. Both use MCP to access the same Salesforce instance. But the revenue forecasting agent was trained on a definition of “ARR” that excludes professional services, while the health scoring agent uses a definition that includes it. When these agents exchange data about the same customer, they generate contradictory signals: both “correct” within their own context, but incompatible at the system level.
This is called Semantic Intent Divergence. Research on enterprise multi-agent deployments identifies it as a primary unaddressed root cause of agent failure in production. As one arXiv study on multi-agent architectures describes the failure mode: “two agents can take entirely different actions on different resources and still be in logical contradiction because their actions are incompatible at the intent level” (arXiv:2604.16339).
How OSI addresses it
The Open Semantic Interchange (OSI), launched in September 2025 by Snowflake, Salesforce, and dbt Labs, creates a vendor-neutral standard for expressing and sharing semantic definitions. OSI uses a declarative YAML format (built on dbt Labs’ MetricFlow framework) to define semantic models as containers that include datasets, relationships, measures, dimensions, and contextual metadata.
The goal: if an agent on Snowflake Intelligence and an agent on a custom LangChain stack both consume an OSI-compliant semantic model for “ARR,” they use the same definition with no custom translation between them.
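To illustrate what such a shared definition looks like, here is an OSI-style semantic model for “ARR,” written as a Python dict rather than the actual YAML schema. The field layout is loosely modeled on MetricFlow conventions and is not the finalized OSI specification:

```python
# Illustrative OSI-style semantic model for "ARR". Field names are
# loosely modeled on MetricFlow conventions, not the finalized spec;
# the dataset and expression are hypothetical.
arr_definition = {
    "semantic_model": {
        "name": "arr",
        "dataset": "finance.recognized_revenue",
        "measures": [
            {
                "name": "arr",
                "agg": "sum",
                "expr": "annual_contract_value",
            }
        ],
        "dimensions": [{"name": "customer_id", "type": "categorical"}],
        "metadata": {
            "owner": "finance-data-team",
            # The contested question from the example above, settled
            # once, in the definition itself:
            "includes_professional_services": False,
        },
    }
}

# Both the forecasting agent and the health-scoring agent consume this
# one definition instead of each embedding its own.
model = arr_definition["semantic_model"]
assert model["metadata"]["includes_professional_services"] is False
```

The point is less the syntax than the single source: the professional-services question that split the two agents in the earlier example is answered once, in the definition, rather than twice in two training sets.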
The OSI specification was finalized in January 2026 and published under the Apache 2.0 license. Atlan is a named launch partner, exposing its governed semantic context through the OSI interface so agents on any platform can consume the same definitions without custom plumbing.
Why protocols alone aren’t enough
The protocols described above are necessary infrastructure. They are not designed to be sufficient on their own, and each protocol’s specification is explicit about its scope. MCP handles tool access. A2A handles agent delegation. OSI handles semantic interchange. None claims to be an enterprise governance layer. That gap is real, and it is what causes multi-agent systems to fail in production even after teams have implemented all three.
Even with MCP, A2A, and OSI in place, multi-agent systems still fail when the underlying context is untrustworthy, ungoverned, or inconsistent. Here is why:
OSI provides interchange, not enforcement. This is by design. OSI’s explicit scope is standardizing how semantic definitions travel between systems, not governing whether those definitions are accurate or policy-compliant. That scope boundary matters in practice. An agent consuming an OSI-compliant “revenue” definition still needs to know: whose definition is this? When was it last reviewed? Which systems is it verified against? OSI carries the definition; it does not certify it.
Protocols have no memory. MCP and A2A are stateless communication standards. They do not track what an agent did, what data it accessed, or how its outputs were used. Enterprise AI governance requires audit trails, policy enforcement, and lineage. None of these are native to any current protocol.
Protocol versions drift. MCP is now on its 2025-11-25 specification; A2A has already received upgrades since its April 2025 launch. Enterprise multi-agent architectures built tightly to a specific protocol version become fragile as standards evolve. The underlying context layer needs to be protocol-agnostic to remain durable.
The insight that follows from this: the protocols determine the grammar of agent communication. The context layer determines whether agents are saying something true.
How the context layer enables true agent interoperability
The missing layer in every protocol stack diagram is the one that sits beneath all three: a governed, enterprise-specific context layer that every agent (regardless of framework, vendor, or protocol version) can trust as the source of truth.
This is what Atlan’s Context Lakehouse provides. It is the layer that makes protocols meaningful rather than merely syntactic.
What a context layer does that protocols don’t
A governed context layer provides:
- Shared definitions: business terms, metrics, and KPIs with approved definitions, owners, and review dates. When two agents query “churn rate,” they get the same governed definition, not whatever their individual training encoded.
- Lineage: the full data lineage graph, from source system to transformation to consumption. An agent can verify not just what a number is, but where it came from and whether the pipeline producing it is healthy.
- Policies: access controls, data classifications, and compliance rules enforced at the context layer and not left to individual agents to interpret independently.
- Memory: bidirectional writes. Agents not only consume context; they post observations, quality signals, and anomaly flags back into the context layer, so the shared knowledge base improves over time.
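A small sketch of what the “shared definitions” capability buys an agent in practice. The record shape and the guard function are hypothetical illustrations of the pattern, not an Atlan API:

```python
from datetime import date

# Hypothetical shape of a governed term as a context layer might serve
# it. Field names are illustrative, not a real catalog API.
churn_rate = {
    "term": "churn_rate",
    "definition": "Customers lost in period / customers at period start",
    "owner": "growth-analytics",
    "status": "VERIFIED",
    "last_reviewed": date(2026, 1, 15),
}

def is_trustworthy(term: dict, today: date, max_age_days: int = 365) -> bool:
    """Agent-side guard: only act on verified, recently reviewed terms."""
    age = (today - term["last_reviewed"]).days
    return term["status"] == "VERIFIED" and age <= max_age_days

# A verified, recently reviewed definition passes the guard; a draft
# or stale one does not.
assert is_trustworthy(churn_rate, date(2026, 3, 1))
assert not is_trustworthy({**churn_rate, "status": "DRAFT"}, date(2026, 3, 1))
```

The ownership and review-date metadata is what turns a definition from a string an agent happens to hold into a claim the agent can check before acting on it.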
For a detailed architectural breakdown, see Context Architecture for AI Agents.
How Atlan implements this
Atlan’s approach is protocol-agnostic by design. The Context Lakehouse supports MCP, A2A, and OSI simultaneously, not because it prioritizes any one standard, but because the context it exposes is built to outlive whichever protocol wins in any given year.
Practically, this means:
- Atlan MCP Server: delivers lineage, ownership, glossary definitions, and quality metrics directly from the Metadata Lakehouse via the MCP protocol. Any MCP-compatible agent accesses governed metadata without custom integration.
- Bidirectional A2A writes: the Context Lakehouse supports agents posting observations and quality signals back via A2A, not just consuming data. Context becomes a living record of what agents have done and found.
- OSI partnership: Atlan exposes semantic context through the OSI interface so agents on Snowflake, dbt, Salesforce, or any OSI-compatible platform consume the same business definitions from a single governed source.
- Context Repos: versioned, deployable context packages that any agent framework can mount. These are the “shared brain” across orchestrators: portable, versioned, and not tied to any single protocol implementation.
The underlying Metadata Lakehouse is built on open Apache Iceberg tables, meaning context is sovereign: it lives in your storage, under your governance policies, accessible to any protocol or framework that speaks standard table formats. There is no vendor lock-in to a protocol implementation.
As Atlan’s position states: “Your context layer should outlive any protocol.”
Real stories from real customers: governed context enabling agent interoperability
"We're excited to build the future of AI governance with Atlan. All of the work that we did to get to a shared language at Workday can be leveraged by AI via Atlan's MCP server…as part of Atlan's AI Labs, we're co-building the semantic layer that AI needs with new constructs, like context products."
— Joe DosSantos, VP of Enterprise Data & Analytics, Workday
"Atlan is much more than a catalog of catalogs. It's more of a context operating system…Atlan enabled us to easily activate metadata for everything from discovery in the marketplace to AI governance to data quality to an MCP server delivering context to AI models."
— Sridher Arumugham, Chief Data & Analytics Officer, DigiKey
Protocols make agents talk. The context layer makes them understand each other.
The agent interoperability landscape in 2026 has more standards than any team can implement independently. MCP, A2A, OSI, ANP, ACP: each addresses a real problem. Each is necessary. None is sufficient on its own.
The enterprise teams making progress in production are those that have recognized this pattern: protocol compliance gets agents to the starting line. Shared, governed context is what keeps them running in the same direction.
Whether you are evaluating MCP for tool access, piloting A2A for multi-agent workflows, or assessing OSI for semantic consistency, the architectural decision beneath all of it is the same: what is the context layer that agents will share, who owns it, how is it governed, and does it work regardless of which protocol version is current next year?
For a full picture of how context architecture supports multi-agent system orchestration at scale, and how the Context Lakehouse serves as the protocol-agnostic layer beneath any agent stack, the resources linked throughout this article go deeper on implementation specifics.
Frequently asked questions
1. What is the difference between MCP and A2A?
MCP (Model Context Protocol) and A2A (Agent-to-Agent Protocol) solve problems at different layers of the agent stack. MCP standardizes how a single agent connects to external tools, APIs, and data sources (it is an agent-to-tool protocol). A2A standardizes how one AI agent discovers, communicates with, and delegates tasks to another AI agent (it is an agent-to-agent protocol). Most production multi-agent systems use both: MCP to access tools, A2A to coordinate between agents.
2. What is the Open Semantic Interchange (OSI) and why does it matter for AI agents?
OSI is an open-source standard, launched in September 2025 by Snowflake, Salesforce, and dbt Labs, that defines a vendor-neutral format for expressing and sharing semantic definitions: what “revenue,” “active users,” or “churn rate” mean in a specific business context. It matters for AI agents because MCP and A2A handle communication mechanics but not meaning. Without a shared semantic standard, two protocol-compliant agents can still produce contradictory outputs if they use different definitions of the same business term.
3. Are MCP, A2A, and OSI competing protocols?
No. They operate at different layers of the stack and are complementary. MCP handles agent-to-tool access (Layer 1). A2A handles agent-to-agent task delegation (Layer 2). OSI handles semantic agreement on shared business definitions (Layer 3). An enterprise multi-agent system typically needs all three layers to function reliably in production.
4. What is “Semantic Intent Divergence” and how does it affect multi-agent systems?
Semantic Intent Divergence is the phenomenon where cooperating AI agents develop inconsistent interpretations of shared objectives because each reasons within its own context window, knowledge base, and prompt framing. It is documented as a primary root cause of multi-agent failure in production: research shows 41–86.7% failure rates in deployed multi-agent systems, with ~79% of failures originating from coordination failures rather than model limitations. Even agents that are MCP and A2A compliant can suffer from Semantic Intent Divergence if they lack a shared, governed context layer.
5. What is the Agent Network Protocol (ANP) and how does it differ from A2A?
ANP (Agent Network Protocol) operates at a fourth layer focused on decentralized, open-internet agent discovery and identity. Where A2A handles structured task delegation within known enterprise agent ecosystems, ANP addresses how agents find and authenticate each other across open, distributed systems outside organizational firewalls. ANP uses a three-layer system: identity and encrypted communication, meta-protocol negotiation, and application protocol. It is positioned for open agent marketplaces and decentralized agent ecosystems, complementing rather than replacing A2A.
6. Why is a context layer needed if protocols already enable agent communication?
Protocols solve the syntax of agent communication: how messages are formatted, how agents discover each other, how tasks are handed off. They do not solve the semantics, specifically whether agents are operating on accurate, consistent, governed definitions of the data they are exchanging. A governed context layer provides: shared business term definitions with versioning and ownership, full data lineage so agents can verify data provenance, access policies enforced centrally rather than interpreted individually by each agent, and memory of past agent actions. Without these, protocol-compliant agents can still produce contradictory or ungoverned outputs.
7. How does Atlan support multiple agent interoperability protocols?
Atlan’s Context Lakehouse is protocol-agnostic. It provides an MCP Server for agent-to-tool access (any MCP-compatible agent accesses Atlan’s governed metadata without custom integration), supports bidirectional A2A writes (agents post observations and quality signals back into the shared context), and is an OSI launch partner (exposing semantic context through the OSI interface for cross-platform semantic consistency). The underlying Metadata Lakehouse is built on open Apache Iceberg, so context is sovereign and portable, independent of any specific protocol version.
8. What does “sovereign context on open Iceberg” mean in practice?
It means the governed metadata that agents rely on (definitions, lineage, policies, quality signals) lives in your own storage on open Apache Iceberg tables, not in a vendor-controlled proprietary format. Any agent framework that speaks standard table formats can read it. Any protocol that needs to access it can do so without a vendor’s proprietary API as a gatekeeper. The practical implication: as protocols evolve (MCP, A2A, and OSI have all already received updates since their initial releases), your context layer does not need to be rebuilt. It remains accessible regardless of which protocol version is current.
Sources
- Announcing the Agent2Agent Protocol (A2A), Google Developers Blog
- Introducing the Model Context Protocol, Anthropic
- A Survey of Agent Interoperability Protocols: MCP, ACP, A2A, and ANP, arXiv 2505.02279
- Open Semantic Interchange (OSI) Specification Finalized, Snowflake
- Atlan Joins Snowflake and Industry Leaders to Launch the Open Semantic Interchange
- MCP Hits 97M Downloads: Model Context Protocol Guide, Digital Applied
- Gartner Predicts 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026
- The Orchestration of Multi-Agent Systems: Architectures, Protocols, and Enterprise Adoption, arXiv
- What the Open Semantic Interchange (OSI) spec means for metrics, semantics, and AI, dbt Labs
- What Is Agent2Agent (A2A) Protocol?, IBM