What Is Google's A2A Protocol?

Emily Winks, Data Governance Expert
Published: 05/04/2026 | Updated: 05/04/2026
17 min read

Key takeaways

  • Google launched A2A in April 2025 to enable agents from different vendors to discover, delegate, and coordinate tasks.
  • A2A and MCP are complementary: A2A handles agent coordination; MCP handles what agents can access from tools and data.
  • A2A uses HTTP, SSE, and JSON-RPC 2.0 — with Agent Cards for capability discovery and Tasks for work delegation.
  • A2A compliance alone is insufficient — agents still need a shared context layer to align definitions and business knowledge.

What is the Google A2A protocol?

The A2A (Agent-to-Agent) protocol is an open standard Google announced in April 2025 that enables AI agents built by different vendors to discover each other, delegate tasks, and coordinate work across enterprise systems. It uses HTTP, Server-Sent Events, and JSON-RPC 2.0 for transport, and Agent Cards for capability advertisement. A2A sits alongside MCP (which handles agent-to-tool access) to form the interoperability stack for multi-agent systems.

Key concepts

  • Agent Cards: capability advertisement at /.well-known/agent-card.json
  • Task lifecycle: submitted, working, input-required, completed, failed, canceled, rejected
  • Transport: HTTP + SSE + JSON-RPC 2.0 — no new protocol layer required


What is the A2A protocol?


Google announced the Agent2Agent (A2A) protocol on April 9, 2025, at Google Cloud Next. It is an open specification that defines how autonomous AI agents from different vendors and frameworks can discover each other, delegate tasks, and coordinate work. Neither agent needs to expose its internal logic, memory, or implementation details.

Before A2A, enterprise AI teams building multi-agent systems faced a fragmentation problem: agents from one vendor could not reliably hand off work to agents from another vendor using a shared, open standard. Teams could write custom API integrations between specific agents, but every new agent pair required a new bespoke connector. Each framework had its own internal messaging format and task model, and there was no common discovery mechanism. A Salesforce agent could not delegate a sub-task to a ServiceNow agent without custom glue code; a Google Vertex agent could not coordinate with an AWS Bedrock agent without a hand-rolled bridge. A2A replaces that one-off integration effort with a single open standard.

A2A addresses this by defining three foundational things: a way for agents to advertise what they can do (Agent Cards), a structure for the work they exchange (Tasks), and a transport for sending that work over the wire (HTTP, SSE, JSON-RPC 2.0).

The protocol was contributed to the Linux Foundation in June 2025 and is maintained as an open-source project under the Apache 2.0 license. As of April 2026, more than 150 organizations support A2A, including Google, Microsoft, AWS, Salesforce, SAP, ServiceNow, Workday, and IBM. Azure AI Foundry, Amazon Bedrock AgentCore, and Google Cloud have all integrated A2A natively into their platform offerings.


How the A2A protocol works technically


A2A defines a client-remote model: a client agent identifies a task it needs to delegate, locates an appropriate remote agent, and sends the task using the A2A protocol. The remote agent accepts the task, executes it, and returns results (called artifacts) along with status updates.

Agent Cards


Every A2A-compliant agent publishes an Agent Card at the well-known URL /.well-known/agent-card.json. The Agent Card is a JSON document that describes:

  • Name and description: what the agent does, in human-readable terms
  • Version and service endpoint: where to send A2A requests
  • Supported modalities: text, structured data, files, audio, video
  • Authentication requirements: API key, OAuth 2.0, or OpenID Connect
  • Capability flags: whether the agent supports streaming or push notifications

When a client agent needs to delegate work, it reads the target agent’s Agent Card to determine whether that agent can handle the task and how to communicate with it. This is the discovery layer, analogous to reading an API’s OpenAPI spec before making a call.
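To make the discovery step concrete, here is a minimal sketch of what an Agent Card might look like and how a client could check it before delegating. The field names below follow the bullet list above but are illustrative assumptions; the normative schema is defined by the A2A specification.

```python
# Illustrative Agent Card, as a client might parse it after fetching
# /.well-known/agent-card.json. Field names are assumptions based on the
# capabilities described above, not a verbatim copy of the spec.
agent_card = {
    "name": "invoice-agent",
    "description": "Creates and sends invoices on behalf of merchants",
    "version": "1.0.0",
    "url": "https://agents.example.com/a2a",            # service endpoint
    "defaultInputModes": ["text", "data"],              # supported modalities
    "defaultOutputModes": ["text", "data"],
    "capabilities": {"streaming": True, "pushNotifications": False},
    "securitySchemes": {"oauth": {"type": "oauth2"}},   # auth requirements
}

def can_stream(card: dict) -> bool:
    """Client-side check: does this remote agent advertise SSE streaming?"""
    return bool(card.get("capabilities", {}).get("streaming"))

print(can_stream(agent_card))  # → True
```

A client would typically run a check like this once at discovery time, then choose between a synchronous call and an SSE stream based on the advertised capability flags.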

Task lifecycle


The Task is the fundamental unit of work in A2A. Every task has a unique ID and progresses through a defined set of states:

  • Submitted: the client has sent the task to the remote agent
  • Working: the remote agent is actively processing
  • Input-required: the agent needs additional information from the client before proceeding
  • Completed: the task finished successfully; artifacts (outputs) are attached
  • Failed: the task ended with an error
  • Canceled: the client canceled the task before completion
  • Rejected: the remote agent refused the task (e.g., outside its scope)

For short tasks, the entire lifecycle completes in a single synchronous response. For long-running tasks (a complex data analysis, a multi-step compliance check, a report generation workflow), A2A supports streaming via Server-Sent Events (SSE), so the client receives status updates in real time without polling.
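The lifecycle above can be sketched as a small state machine. The seven states come straight from the list; the transition map itself is an illustrative assumption about which moves are legal, not a quotation from the spec.

```python
# Minimal sketch of the A2A task lifecycle as a state machine.
# States are from the article; the TRANSITIONS map is an assumed model.
TERMINAL = {"completed", "failed", "canceled", "rejected"}

TRANSITIONS = {
    "submitted": {"working", "rejected", "canceled"},
    "working": {"input-required", "completed", "failed", "canceled"},
    "input-required": {"working", "canceled", "failed"},
}

def advance(state: str, new_state: str) -> str:
    """Validate and apply a state transition; raise on illegal moves."""
    if state in TERMINAL:
        raise ValueError(f"task already terminal: {state}")
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical long-running task: pauses for input, then finishes.
state = "submitted"
for nxt in ("working", "input-required", "working", "completed"):
    state = advance(state, nxt)
print(state)  # → completed
```

Modeling the lifecycle this way makes the key invariant visible: once a task reaches a terminal state (completed, failed, canceled, rejected), no further updates are valid.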

Transport and security


A2A is intentionally built on existing web standards rather than a new transport layer:

  • JSON-RPC 2.0 over HTTPS handles all core task communications
  • Server-Sent Events (SSE) enable streaming for long-running tasks
  • Push notifications support fully asynchronous workflows where the client does not stay connected

Authentication aligns with OpenAPI security schemes: API keys, OAuth 2.0, and OpenID Connect Discovery. This means A2A integrates with existing enterprise identity infrastructure. No new auth system is required.
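As a concrete illustration of "no new protocol layer," here is a sketch of the JSON-RPC 2.0 envelope a client agent might POST over HTTPS to delegate work. The method name and params shape are assumptions for illustration; consult the A2A spec for the normative message formats.

```python
import json
import uuid

def make_send_request(text: str) -> str:
    """Build an illustrative JSON-RPC 2.0 request delegating a task.
    "message/send" and the params layout are assumed, not normative."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),      # correlates the eventual response
        "method": "message/send",     # assumed A2A method name
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
            }
        },
    })

req = json.loads(make_send_request("Create an invoice for $120"))
print(req["jsonrpc"], req["method"])  # → 2.0 message/send
```

Because this is plain JSON-RPC over HTTPS, any existing HTTP client, gateway, or auth proxy in the enterprise stack can carry A2A traffic unchanged.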

A2A vs MCP: different layers, complementary roles


A2A and MCP are not competing standards. They solve different problems at different layers of the multi-agent stack, and production systems use both.

MCP (Model Context Protocol) was introduced by Anthropic in November 2024. It is a vertical protocol: it defines how a single agent connects to external tools and data sources. When an agent needs to query a database, call an API, retrieve documents, or access a governed metadata catalog, MCP is the mechanism. Think of it as the agent’s plug into the outside world.

A2A is a horizontal protocol: it defines how one agent delegates work to another agent. When a high-level orchestrator agent needs a specialized agent to handle invoice processing, compliance checking, or data enrichment, A2A is the mechanism. Think of it as the coordination layer between agents.

In practice, they work in sequence. An orchestrator agent uses A2A to route a task to a specialized remote agent. That specialized agent then uses MCP to pull the context it needs from systems like Atlan before completing the work and returning artifacts. PayPal’s production deployment illustrates this precisely: the A2A handshake routes from a sales agent to a PayPal-provided agent; that PayPal agent then uses an MCP client to invoke the actual payment tools.

The key distinction:

  • MCP answers: “What data and tools can this agent access?”
  • A2A answers: “Which agent should handle this task, and how do we coordinate?”

Both questions must be answered for a multi-agent system to work reliably in production. This is why Google explicitly described A2A as complementary to MCP when it launched the protocol, and why leading enterprise implementations use both together. For a deeper look at the full interoperability landscape, see agent interoperability protocols explained.
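The two-layer sequence described above can be sketched as follows. Every function here is a hypothetical stub standing in for real SDK calls; the point is the order of operations, not the API surface.

```python
# Stub sketch of the combined flow: A2A for discovery and delegation,
# MCP (inside the remote agent) for tool access. All names hypothetical.
def a2a_discover(agent_base_url: str) -> dict:
    # In reality: GET {agent_base_url}/.well-known/agent-card.json
    return {"name": "payment-agent", "url": agent_base_url + "/a2a"}

def mcp_call_tool(tool: str, args: dict) -> dict:
    # Stub for the remote agent's MCP client invoking an actual tool.
    return {"tool": tool, "status": "ok", **args}

def a2a_delegate(card: dict, instruction: str) -> dict:
    # Stub remote agent: accepts the A2A task, does the work via MCP,
    # and returns a completed task with artifacts attached.
    artifact = mcp_call_tool("create_invoice", {"memo": instruction})
    return {"state": "completed", "artifacts": [artifact]}

card = a2a_discover("https://payments.example.com")
task = a2a_delegate(card, "Invoice ACME for $120")
print(task["state"])  # → completed
```

The split mirrors the PayPal example: the client never touches the payment tools directly; it only sees the A2A task and its artifacts, while MCP stays entirely inside the remote agent.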


Who supports A2A


A2A launched in April 2025 with more than 50 technology partners. By April 2026, that number had grown to more than 150 organizations.

Major cloud platforms:

  • Google Cloud: native A2A support across Vertex AI and Agentspace
  • Microsoft Azure: A2A integrated into Azure AI Foundry and Copilot Studio
  • Amazon Web Services: A2A support through Amazon Bedrock AgentCore Runtime

Enterprise SaaS:

  • Salesforce: extending Agentforce with A2A support to enable cross-ecosystem agent coordination
  • SAP: A2A integration in business process automation agents
  • ServiceNow: partnered with Google Cloud to set the industry standard for IT operations agent interoperability
  • Workday: supporting A2A for HR and finance agent workflows
  • UKG: workforce management agent integration

Developer ecosystem:

  • LangChain, MongoDB, Cohere, Box, Atlassian, Intuit, PayPal

Consulting and implementation partners:

  • Accenture, BCG, Capgemini, Deloitte, McKinsey, PwC, TCS, Wipro, Cognizant, HCLTech

The A2A GitHub repository has surpassed 22,000 stars, and the SDK ecosystem has grown from a single Python implementation to five production-ready languages: Python, JavaScript, Java, Go, and .NET. As of April 2026, the protocol is in active production use at more than 150 organizations.


A2A use cases in enterprise multi-agent systems


A2A is most valuable when an enterprise runs agents from multiple vendors and needs them to collaborate on tasks that cross system boundaries.

Cross-platform task delegation


A high-level orchestrator agent breaks a complex request into sub-tasks and delegates each to the most capable specialized agent, regardless of which vendor built it. A Salesforce CRM agent can route a support escalation to a ServiceNow ITSM agent. A Google Workspace agent can delegate document summarization to a specialized LLM agent on Azure. A2A handles the routing and handoff; neither agent needs to know how the other is built.

Financial services workflows


PayPal deployed A2A in production for merchant-facing workflows: a sales agent receives a natural-language request, uses A2A to locate and authenticate a PayPal-provided payment agent via its Agent Card, and delegates invoice creation. The payment agent then uses MCP to call the underlying PayPal tools. The entire workflow is auditable and crosses vendor boundaries without custom integration code.

Supply chain coordination


A procurement agent uses A2A to coordinate with inventory, shipping, compliance, and finance agents spread across different vendor systems. Each agent reports task status back through A2A’s task lifecycle, giving the orchestrator real-time visibility into a distributed workflow without any agent exposing its internal state.

IT operations


A ticket triage agent receives an alert and uses A2A to route it to the most appropriate remediation agent (networking, security, database, or application) based on the content of the incident. Long-running remediation tasks report status updates back via SSE so the triage agent can monitor progress and escalate if the task enters an input-required state.
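Consuming those SSE status updates is straightforward, since only the standard `data:` line framing is involved. Here is a minimal parser; the event payloads shown are hypothetical, only the SSE framing is standard.

```python
import json

def parse_sse(stream: str):
    """Yield decoded JSON payloads from SSE-framed text.
    Events are separated by blank lines; payloads follow "data:"."""
    for event in stream.split("\n\n"):
        for line in event.splitlines():
            if line.startswith("data:"):
                yield json.loads(line[len("data:"):].strip())

# Hypothetical status updates from a long-running remediation task.
raw = (
    'data: {"taskId": "t-42", "state": "working"}\n\n'
    'data: {"taskId": "t-42", "state": "input-required"}\n\n'
)
states = [event["state"] for event in parse_sse(raw)]
print(states)  # → ['working', 'input-required']
```

A triage agent watching this stream can escalate the moment a task enters input-required, rather than discovering it on the next polling cycle.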

HR and workforce workflows


A Workday HR agent coordinates with a Salesforce CRM agent during employee onboarding: creating accounts, provisioning access, and syncing records across systems. A2A provides the structured task delegation layer that makes this cross-system coordination reliable and auditable.

For a broader view of how these patterns fit into the overall architecture of multi-agent system orchestration, the orchestration layer and the protocol layer work together.


Why a shared context layer is required for A2A to work


A2A solves the communication problem between agents. It does not solve the knowledge problem.

Imagine a procurement agent and a compliance agent, both A2A-compliant, coordinating on a vendor approval workflow. The procurement agent’s definition of “approved vendor” is sourced from a Salesforce custom field maintained by the procurement team. The compliance agent’s definition of “approved vendor” pulls from a separate risk register maintained by the legal team. Both agents complete their tasks using A2A. Both return “approved.” But they are reasoning from different, inconsistent definitions of the same concept.

This is not a protocol failure. It is a context failure. A2A routes tasks correctly; it cannot ensure the agents agree on what the underlying business terms mean. Protocols could theoretically be extended with semantic governance features, but today’s A2A and MCP specifications do not include mechanisms for enforcing shared enterprise ontologies, data lineage, or governance policies. Those concerns sit outside the protocol layer by design; they are governed context problems, not communication problems.

This is the gap that a governed context layer fills. A shared context layer provides three things that A2A and MCP cannot provide on their own:

1. Shared definitions across agents. A unified semantic layer covering business glossary, data lineage, quality scores, ownership, and governance policies that every agent reads from, regardless of which vendor built it or which protocol it uses. When every agent queries the same governed definition of “active customer” or “revenue,” contradictions disappear at the source.

2. Shared memory and state. A persistent store for agent context: session logs from A2A interactions, long-term memory, artifacts from prior tasks, and on-demand context retrieval via MCP or RAG. Agents in a multi-step workflow can pick up where the previous agent left off because the context is stored in a shared, governed layer, not in any single agent’s local memory.

3. Shared transport across all protocols. The same governed context is accessible through A2A, MCP, OSI, SQL, and REST APIs. Agents do not need bespoke integrations to pull context from the shared layer. Whatever protocol the agent speaks, the context layer responds in kind.

Atlan’s Context Lakehouse is built on Apache Iceberg with native graph and vector search. It speaks every protocol: MCP and A2A natively, plus SQL and REST/Graph APIs, making it the governed substrate that multi-agent systems can build on without vendor lock-in. Atlan is also an OSI (Open Semantic Interchange) partner, meaning Atlan’s governed semantic definitions are exposed in the OSI format that the broader enterprise data ecosystem (including Snowflake, Salesforce, dbt Labs, and ThoughtSpot) recognizes.

The positioning is straightforward: “A2A and MCP standardize communication; Atlan standardizes what those agents know about your business.”

To understand how this fits into the broader architecture of context for AI agents, see context architecture for AI agents. For how Atlan’s MCP server specifically connects governed context to AI tools, see what is Atlan MCP.


Real stories from real customers: making agent interoperability production-ready


The enterprises that have moved beyond AI pilots share a common pattern: they invested in the context layer before scaling agent coordination. Here is what two of them have said about the role of governed context in making AI agents production-ready.

"We're excited to build the future of AI governance with Atlan. All of the work that we did to get to a shared language at Workday can be leveraged by AI via Atlan's MCP server…as part of Atlan's AI Labs, we're co-building the semantic layer that AI needs with new constructs, like context products."

Joe DosSantos, VP of Enterprise Data & Analytics, Workday

"Atlan is much more than a catalog of catalogs. It's more of a context operating system…Atlan enabled us to easily activate metadata for everything from discovery in the marketplace to AI governance to data quality to an MCP server delivering context to AI models."

Sridher Arumugham, Chief Data & Analytics Officer, DigiKey


Why the context layer is the missing piece for A2A to work


The enterprise AI landscape in 2026 is converging on a three-layer stack: A2A for agent coordination, MCP for agent-to-tool access, and a shared context layer for governed business knowledge. Each layer is necessary. None of the three is sufficient on its own. Teams can implement the context layer with open-source tooling (a custom knowledge graph, a semantic layer, or a metadata store), but the requirement for a governed, shared context source is structural, not a product recommendation. Without it, agents coordinating via A2A will still draw from inconsistent definitions and produce contradictory results.

A2A gives enterprises the routing infrastructure to let agents from different vendors collaborate. MCP gives those agents the tool access they need to do real work. But both protocols are transport mechanisms. They move tasks and context around; they do not create governed, consistent context. That is the job of the context layer.

The enterprises moving fastest in production multi-agent AI are the ones that invested in the context layer first. Workday built a shared semantic language across its data estate and is now surfacing it to AI agents via Atlan’s MCP server. DigiKey deployed Atlan as a context operating system, feeding consistent, governed metadata to agents, MCP tools, and AI models simultaneously. Both organizations treated the context layer as infrastructure, not an afterthought.

For teams building on A2A today, the practical implication is clear: the protocol will route your tasks. It will not tell your agents what “customer,” “revenue,” “approved,” or “compliant” means in your business. That definition work has to happen in a governed context layer, and it has to happen before you scale.


Frequently asked questions about the A2A protocol

  1. What does A2A stand for in AI?

    A2A stands for Agent-to-Agent. The A2A protocol is an open specification developed by Google that defines how autonomous AI agents from different vendors and frameworks can discover each other, delegate tasks, and coordinate work without exposing their internal implementation. It was announced in April 2025 and is now maintained by the Linux Foundation.

  2. When was the A2A protocol announced?

    Google announced the A2A protocol on April 9, 2025, at Google Cloud Next. The specification was released as open source under the Apache 2.0 license and was donated to the Linux Foundation in June 2025 for neutral governance.

  3. Is A2A the same as MCP?

    No. A2A and MCP are different protocols that solve different problems. A2A handles agent-to-agent coordination: how one agent delegates work to another agent across vendor boundaries. MCP (Model Context Protocol, from Anthropic) handles agent-to-tool access: how a single agent retrieves context from external data systems and tools. Production multi-agent systems typically use both: A2A routes the task to the right agent; MCP gives that agent the context it needs to execute the task.

  4. What is an Agent Card in A2A?

    An Agent Card is a JSON document that every A2A-compliant agent publishes at the well-known URL /.well-known/agent-card.json. It describes the agent’s name, description, version, service endpoint URL, supported modalities, authentication requirements, and capability flags such as streaming and push notifications. Client agents read the Agent Card to discover what a remote agent can do and how to communicate with it.

  5. What task states does A2A define?

    A2A defines seven task states: submitted (received by the remote agent), working (actively processing), input-required (agent needs more information from the client), completed (finished with artifacts), failed (ended with an error), canceled (client canceled before completion), and rejected (agent refused the task). Terminal states are completed, failed, canceled, and rejected.

  6. What companies support the A2A protocol?

    Over 150 organizations support A2A as of 2026. Major supporters include Google, Microsoft, AWS, Salesforce, SAP, ServiceNow, Workday, IBM, Cisco, PayPal, LangChain, MongoDB, and Cohere. Azure AI Foundry, Amazon Bedrock AgentCore, and Google Cloud have all integrated A2A natively into their platform offerings.

  7. Why do A2A-compliant agents still need a shared context layer?

    A2A standardizes how agents communicate and delegate tasks. It does not govern what agents know. If agents from different vendors hold different definitions of “approved vendor,” “active customer,” or “revenue,” they will produce contradictory outputs even when communicating via A2A. A shared context layer (a governed store of semantic definitions, data lineage, and business policies) ensures all agents reason from the same knowledge base, preventing contradictions at the source.

  8. What is OSI and how does it relate to A2A?

    OSI (Open Semantic Interchange) is an open standard for sharing semantic metadata (datasets, metrics, dimensions, relationships, and contextual metadata) across tools and platforms. The v1.0 spec was released in January 2026. Partners include Snowflake, Salesforce, dbt Labs, ThoughtSpot, and Atlan. While A2A standardizes how agents coordinate tasks, OSI standardizes the semantic definitions those agents reason over. Atlan is an OSI partner, meaning its governed context layer exposes semantic definitions in the OSI format that any A2A or MCP-enabled agent can consume.


