
How to enhance Agent2Agent (A2A) security

August 19, 2025
Florencio Cano Gabarda
Related topics:
Artificial intelligence
Related products:
Red Hat AI


    The Agent2Agent (A2A) protocol is an open standard created by Google for AI agents. An AI agent is a system or program that can autonomously perform tasks on behalf of a user or another system. It does this by designing its own workflow and using available tools to achieve its goal.

    An agentic AI system is a software system composed of multiple agents, each one capable of performing a set of tasks. These agents coordinate themselves to accomplish specific tasks. A2A is a standard protocol to enable their communication.

    A2A enables agentic systems in which each agent can communicate with other agents. This makes it possible to swap one agent for another that performs the same task, with no adaptation required. A2A is designed to let AI agents from different vendors communicate and work together seamlessly.

    Communication scenario

    Imagine an agentic AI system with two agents called agent A and agent B. In A2A, an agent can be either a client agent or a remote (or server) agent. A client agent is responsible for creating requests and handling end user interaction. A remote agent is responsible for taking action on these requests. Any agent can act as a client agent or a remote agent at any time, depending on the context.

    At some point, a user makes a request to agent A, acting as the client agent. Agent A determines that it needs agent B to execute a task. To complete this request, agent A retrieves an artifact called an Agent Card from agent B, which acts as a remote agent. Each agent has its own Agent Card stored and accessible by other agents at https://<DOMAIN>/.well-known/agent.json.

    This Agent Card is a JSON file that contains: 

    • Agent B’s name
    • Its operations
    • The HTTP URL endpoint for agent communication
    • The specific skills that the agent offers
    • Any special capabilities
    • How to authenticate 

    With this information, agent A requests that agent B execute a task by sending a tasks/send message. If the task is simple and can be completed immediately, agent B executes it and responds synchronously. This communication flow is illustrated in Figure 1.
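
    The flow above can be sketched in a few lines of Python. This is a minimal illustration rather than a reference implementation: the remote agent's domain, the exact payload fields, and the use of the requests library are assumptions, and a real client should also handle authentication as described later in this article.

```python
import uuid
import requests  # assumed HTTP client; any client works

REMOTE = "https://agent-b.example.com"  # hypothetical domain for agent B

# 1. Discover agent B by fetching its Agent Card from the well-known URI.
card = requests.get(f"{REMOTE}/.well-known/agent.json", timeout=10).json()
endpoint = card["url"]            # HTTP endpoint for agent communication
skills = card.get("skills", [])   # skills the agent advertises

# 2. Ask agent B to execute a task with a JSON-RPC 2.0 tasks/send request.
payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # task ID
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarize today's incidents"}],
        },
    },
}
response = requests.post(endpoint, json=payload, timeout=30)
print(response.json())  # synchronous result for simple tasks
```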

    Figure 1: An agent-to-agent communication scenario. The client agent retrieves the remote agent's Agent Card to learn about its capabilities and how to communicate with it, then sends a message asking it to execute a task.

    If the task requires more time, agent B responds with an acknowledgment and streams progress or status updates back to agent A using methods such as Server-Sent Events (SSE) or webhooks for push notifications.
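
    A hedged sketch of consuming such streamed updates over SSE follows. The tasks/sendSubscribe method name and the event format follow the A2A streaming pattern but may differ between specification versions, so verify them against the version you target.

```python
import json
import uuid
import requests

def stream_task(endpoint: str, text: str) -> None:
    """Send a task and read status updates from the SSE response stream."""
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "tasks/sendSubscribe",  # streaming variant; check your spec version
        "params": {
            "id": str(uuid.uuid4()),
            "message": {"role": "user", "parts": [{"type": "text", "text": text}]},
        },
    }
    with requests.post(endpoint, json=payload, stream=True, timeout=300) as resp:
        for line in resp.iter_lines(decode_unicode=True):
            if line and line.startswith("data:"):
                event = json.loads(line[len("data:"):])
                print("update:", event)  # task status or artifact update
```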

    Authentication in A2A

    From an authentication and authorization point of view, A2A treats agents as standard enterprise applications. Agent identity is managed primarily at the HTTP transport layer, not within the message payloads the agents exchange. Client agents include credentials in the appropriate HTTP header of each request to the remote agent.

    Each remote agent may have different authentication requirements. Client agents find these requirements on the remote agent's Agent Card.

    Credentials for a client agent to connect to a remote agent are obtained by the client agent through an out-of-band process outside the scope of the A2A protocol. Credentials to connect to a remote agent must not be published in the Agent Card.

    According to the specification, the server must validate the authentication of each request based on the HTTP header and individual requirements.
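
    Concretely, a client reads the authentication requirements from the remote agent's Agent Card and attaches the corresponding credential as an HTTP header. The sketch below assumes a bearer-token scheme, with the token itself obtained out of band; it is an illustration, not part of the protocol.

```python
import requests

def send_authenticated(endpoint: str, token: str, payload: dict) -> requests.Response:
    """Attach credentials at the HTTP transport layer, never inside the message payload."""
    # Token obtained out of band (for example, via an OAuth flow), never from the Agent Card.
    headers = {"Authorization": f"Bearer {token}"}
    return requests.post(endpoint, json=payload, headers=headers, timeout=30)
```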

    In A2A flows, additional authentication may be necessary, for example, when a remote agent needs to authenticate to a third-party service on behalf of the client agent. For this secondary authentication, credentials are also obtained by the client agent out of band, but in this case they are passed to the server in the body of the JSON-RPC 2.0 message.

    The specification recommends that client agents validate the identity of remote agents when initiating communication, during the TLS handshake, by validating the remote agent's TLS certificate against trusted certificate authorities (CAs).
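
    In Python, for example, most HTTP clients already validate server certificates against a trust store by default; the important thing is not to disable that validation. A sketch, with the CA bundle path as an assumption:

```python
import requests

# Verification against the system (or a pinned) CA bundle is on by default.
# Never pass verify=False in production.
resp = requests.get(
    "https://agent-b.example.com/.well-known/agent.json",
    verify="/etc/pki/tls/certs/ca-bundle.crt",  # or True to use the default trust store
    timeout=10,
)
```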

    Authorization in A2A

    Once a client agent is authenticated on a remote agent, it is up to the remote agent to decide whether to authorize the request. A2A does not define how authorization must be performed. Without a clear definition, the system becomes vulnerable to potential security problems, including authorization creep. A2A outlines key aspects to consider, such as the specific skill requested, the actions attempted within the task, data access policies, and OAuth scopes, if applicable. It's crucial that A2A servers implement the principle of least privilege.
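
    As an illustration of least-privilege authorization on the remote agent side, the sketch below checks the requested skill against the permissions granted to the authenticated client before executing a task. The policy table, client identifiers, and skill names are hypothetical; A2A leaves this logic to the implementer.

```python
# Hypothetical per-client policy: which skills each authenticated client may invoke.
ALLOWED_SKILLS = {
    "client-agent-a": {"summarize-incidents"},
    "reporting-agent": {"summarize-incidents", "generate-report"},
}

def authorize(client_id: str, requested_skill: str) -> bool:
    """Grant only what the client was explicitly given (principle of least privilege)."""
    return requested_skill in ALLOWED_SKILLS.get(client_id, set())

def handle_task(client_id: str, requested_skill: str, params: dict):
    if not authorize(client_id, requested_skill):
        # Reject rather than fall back to a broader permission set.
        raise PermissionError(f"{client_id} is not authorized for skill {requested_skill}")
    ...  # execute the task
```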

    Encrypted communications

    Production deployments must use HTTPS with modern TLS implementations, specifically TLS 1.3+ and strong cipher suites, including post-quantum cryptography (PQC) cipher suites as they become available.
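
    For example, a Python-based A2A server could enforce a TLS 1.3 floor with the standard library's ssl module; which cipher suites (including PQC) are actually available depends on the underlying OpenSSL build, so treat this as a sketch:

```python
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older than TLS 1.3
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # paths are placeholders
# TLS 1.3 cipher suites are already restricted to strong AEAD choices;
# post-quantum key exchange depends on the OpenSSL/provider version in use.
```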

    Message content security

    A2A requires the use of JSON-RPC 2.0 to exchange messages between agents, except in the SSE stream wrapper. This means that if an agent implements its own JSON-RPC 2.0 parser or, more likely, uses an existing one, that component may contain vulnerabilities. The JSON-RPC 2.0 parser an agent uses is therefore security critical: it must be kept up to date and free of known vulnerabilities.

    Task replay

    Some sources, like Habler et al. (2025), identify task replay as a security risk in A2A architectures. The following controls can be implemented to reduce this risk:

    • Include a unique nonce in the tasks/send request
    • Use timestamp verification
    • Use Message Authentication Codes (MAC)

    The best option is to combine these three controls, but if that's too costly, including a unique nonce in each request is a sound approach.
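
    A minimal sketch of combining the three controls: the client adds a nonce and a timestamp to the task parameters and computes an HMAC over them, and the server rejects stale timestamps, reused nonces, and invalid MACs. The shared key, field names, and in-memory nonce cache are illustrative; none of this is defined by the A2A specification.

```python
import hmac, hashlib, json, os, time

SHARED_KEY = b"out-of-band-shared-secret"   # illustrative; use real key management in practice
SEEN_NONCES = set()                          # server-side replay cache (bound its size in practice)
MAX_SKEW = 300                               # maximum accepted clock skew, in seconds

def protect(params: dict) -> dict:
    """Client side: add a nonce and timestamp, then an HMAC over the canonical payload."""
    params = dict(params, nonce=os.urandom(16).hex(), ts=int(time.time()))
    body = json.dumps(params, sort_keys=True).encode()
    params["mac"] = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return params

def verify(params: dict) -> bool:
    """Server side: reject tampered, stale, or replayed requests."""
    mac = params.pop("mac", "")
    body = json.dumps(params, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False
    if abs(time.time() - params["ts"]) > MAX_SKEW or params["nonce"] in SEEN_NONCES:
        return False
    SEEN_NONCES.add(params["nonce"])
    return True
```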

    Implementing HTTPS and mTLS between agents reduces the risk of a malicious party capturing and replaying messages.

    Agent Card security

    Agent Cards are the mechanism agents use to discover and gather information about other agents. They are vital for the correct and secure execution of A2A agents.

    An Agent Card must be served over HTTPS using modern TLS configurations, specifically TLS 1.3+, with strong ciphers, including PQC cipher suites as they become available. Depending on the deployment, serving Agent Cards over HTTPS may be sufficient. However, signing an Agent Card can additionally provide authenticity and integrity guarantees (Habler et al., 2025).
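
    One way to add authenticity and integrity on top of HTTPS is a detached signature over the Agent Card document that clients verify against a pinned public key. The sketch below uses Ed25519 from the third-party cryptography package; how the signature and key are distributed is an assumption, since A2A does not standardize card signing.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_agent_card(card_bytes: bytes, signature: bytes, pinned_key_bytes: bytes) -> bool:
    """Verify a detached Ed25519 signature over the raw Agent Card document."""
    public_key = Ed25519PublicKey.from_public_bytes(pinned_key_bytes)  # key distributed out of band
    try:
        public_key.verify(signature, card_bytes)
        return True
    except InvalidSignature:
        return False
```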

    A2A servers may expose a lot of information about themselves in the Agent Card, including sensitive information. The specification establishes that the Agent Card endpoint must be protected by appropriate access controls, such as mTLS, network restrictions, and requiring authentication to fetch the card. The specification also strongly advises against including credentials directly in the Agent Card and reinforces the recommendation to distribute credentials out of band.

    Serving the Agent Card from a non-default URI is security through obscurity; it is not a security measure and is not recommended.

    Notifications security

    A2A can be configured to send webhook notifications to a URL. This URL must be carefully validated to avoid server-side request forgery (SSRF) vulnerabilities. In addition, the webhook must require any A2A server trying to send a notification to authenticate first. Even so, the A2A client must verify that the notification comes from a trusted A2A server and must validate that the notification is relevant; if it is not, it must be discarded.
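
    A sketch of the kind of URL validation that helps avoid SSRF when a notification URL is registered: require HTTPS and reject destinations that resolve to private, loopback, or otherwise non-routable addresses. The check is illustrative and not exhaustive (it does not address DNS rebinding, for example).

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_webhook_url(url: str) -> bool:
    """Reject non-HTTPS URLs and URLs resolving to private, loopback, or link-local addresses."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True
```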

    As with the rest of A2A communication, all webhook communications must use HTTPS.

    Cross-agent prompt injection

    Cross-agent prompt injection is a security vulnerability in which malicious instructions are embedded in content processed by interconnected AI agents, causing one agent to pass or execute harmful commands in another agent's context.

    In A2A architecture, agents often collaborate by sharing information, delegating tasks, and passing outputs as inputs to other agents. This interconnectedness increases the risk and potential impact of prompt injection attacks.

    A2A does not include any specific security control against cross-agent prompt injection. This risk is reduced by defense-in-depth controls like the ones mentioned above: TLS, trusted agents, authentication, authorization, and the principle of least privilege.

    A2A versus MCP

    If you're familiar with the Model Context Protocol (MCP), MCP and A2A may seem to overlap. However, they are distinct protocols with different objectives. MCP defines how a user communicates with an LLM and tools to find an answer to a question. A2A defines how agents communicate with each other. An agent can use tools, and when doing so within A2A, it might use MCP or another mechanism.

    Perspective on A2A security

    A2A is a protocol designed with security in mind. However, the security embedded and enforced within the protocol is often insufficient on its own. 

    If software implements or uses A2A, there are security risks that must be addressed by implementing additional controls. An implementation that ignores security and relies only on what the protocol natively provides can easily leave vulnerabilities that allow malicious actors to exploit the agentic AI system for gain or to cause damage.

    Additionally, keep in mind that the protocol is under active development, so new modifications may improve its security or introduce new attack vectors. Developers implementing or using A2A must therefore stay up to date on changes to the A2A specification to keep their systems secure.

    Last updated: August 20, 2025
