Tutorial: Multi-Agent Collaboration with LangChain, MCP, and Google A2A Protocol
Artificial Intelligence has moved beyond single, monolithic models trying to handle every request. Today, AI systems are becoming agent-based: collections of specialized agents that can work together, communicate, and call external tools to complete complex tasks. If you’ve ever wished your AI assistant could not only “think” but also collaborate and act, you’re already thinking in terms of multi-agent systems.
In this tutorial, we’ll explore three key technologies that make this possible:
LangChain — a Python framework that simplifies building AI agents and connecting them with external tools, memory, and workflows.
Model Context Protocol (MCP) — an open standard that gives agents a consistent way to call external services and tools, like calculators, databases, or APIs.
Agent-to-Agent (A2A) Protocol — a standard that allows different agents (even built by different teams or running on different platforms) to communicate, discover each other, and share tasks.
By the end of this guide, you’ll not only understand what each of these technologies does, but you’ll also build a working Python project that:
Creates a simple LangChain agent.
Equips it with MCP tools for extended capabilities.
Connects it to another agent through A2A, so they can collaborate on solving tasks.
We’ll go step by step, covering environment setup, installation, code examples, and troubleshooting tips. All you need is some basic Python knowledge, curiosity, and a willingness to experiment.
Let’s dive in and build your first collaborative AI system!
What Are MCP, LangChain, and Agent2Agent?
Before jumping into code, let’s make sure we understand the three building blocks of our project. Each solves a different problem, but together they form the foundation of modern multi-agent AI systems.
LangChain: Building Smarter Agents
LangChain is a popular open-source Python framework for working with Large Language Models (LLMs). Instead of having to write boilerplate code to prompt models, connect APIs, and handle workflows, LangChain gives you ready-made components:
Agents — AI decision-makers that can reason about tasks.
Tools — Functions or APIs the agent can call (like calculators, search engines, or databases).
Chains — Workflows that connect multiple steps of reasoning.
Memory — A way for agents to remember past interactions.
With LangChain, you can quickly create an agent that not only “talks” but also acts by calling external tools.
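To make that concrete, here’s a minimal sketch of a native LangChain tool, using the @tool decorator from langchain_core (illustrative only; later in this tutorial our tools will come from MCP servers instead):
# A plain LangChain tool: the function name, type hints, and docstring
# are what the LLM sees when deciding whether (and how) to call it.
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers and return the result."""
    return a * b

# Tools can also be called directly, which is handy for testing:
print(multiply.invoke({"a": 6, "b": 7}))  # 42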
MCP: Giving Agents Tools They Can Trust
The Model Context Protocol (MCP) is an open standard that makes it easy for agents to connect to external tools. Think of it like an “app store” for agents: if an agent knows MCP, it can plug into a library of services without custom coding.
Instead of hardcoding integrations, you just run an MCP server that exposes functions (like “get_weather” or “add_numbers”), and any MCP-aware agent can use them. This gives agents superpowers while keeping the design modular and reusable.
Agent-to-Agent (A2A): Teaching Agents to Collaborate
While MCP is about connecting agents to tools, the Agent-to-Agent Protocol (A2A) is about connecting agents to each other. Developed by Google, A2A provides a standard way for agents to:
Introduce themselves (using an “agent card” that describes what they can do).
Send tasks to one another in a structured format.
Share progress and results in real time.
This means you can have specialized agents, like a “Math Agent” and a “Spelling Agent,” work together to solve a problem that neither could handle alone.
With these three pieces, you can create a system where:
LangChain gives your agent reasoning skills.
MCP equips it with tools.
A2A lets it collaborate with other agents.
Next, let’s set up your Python environment and install the necessary packages.
Setting Up Your Environment
Now that we understand what each technology does, let’s roll up our sleeves and get our environment ready. Don’t worry if you’re new to Python projects; we’ll go step by step.
Step 1: Install Python
Make sure you have Python 3.10 or higher installed. You can check this by running:
python --version
If it’s older than 3.10, download the latest version from python.org.
Step 2: Create a Virtual Environment
It’s best practice to isolate your project with a virtual environment so dependencies don’t conflict.
python -m venv .venv
source .venv/bin/activate # On Mac/Linux
.venv\Scripts\activate # On Windows
Now, all packages you install will stay inside this project.
Step 3: Install Required Libraries
We need LangChain, the MCP adapter, and the A2A SDK. Let’s install them all in one go:
pip install --pre -U langchain # LangChain core
pip install -U langchain-openai # OpenAI connector (or langchain-anthropic if you prefer)
pip install langchain-mcp-adapters # MCP adapter for LangChain
pip install mcp # For creating custom MCP servers
pip install a2a-sdk # Google’s Agent-to-Agent SDK
This gives you everything you need: LangChain to build agents, MCP for tools, and A2A for collaboration.
Step 4: Set Up API Keys
Most agents need an LLM behind them. If you’re using OpenAI or Anthropic, grab an API key and set it as an environment variable:
export OPENAI_API_KEY="your_api_key_here" # Mac/Linux
setx OPENAI_API_KEY "your_api_key_here" # Windows
If you’re following Google’s A2A examples with Gemini, you’ll also need:
export GOOGLE_API_KEY="your_api_key_here"
Tip: You can keep keys in a .env file and load them automatically using python-dotenv.
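For instance, a minimal sketch (assuming you’ve installed python-dotenv and your .env file contains a line like OPENAI_API_KEY=your_api_key_here):
# Load keys from .env into environment variables at startup.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
print("Key loaded:", bool(os.getenv("OPENAI_API_KEY")))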
Step 5: Verify the Installation
Run this quick check in Python:
import langchain
import mcp
import a2a_sdk

print("LangChain:", langchain.__version__)
print("MCP installed")
print("A2A installed")
If you see versions and no errors, you’re good to go!
With the environment ready, the next step is to build your first MCP tool: a small, simple function that agents can call. This will give us the foundation to connect everything together.
Building Your First MCP Tool
Now that our environment is ready, let’s create a simple MCP tool server. Think of this as a service that exposes a function your agent can call, just like a mini-API, but built specifically for agents.
Step 1: Create a New Python File
Make a new file called math_server.py. This will be our MCP server.
Step 2: Write the MCP Tool
Here’s a small MCP server that provides an add function to add two numbers:
# math_server.py
from mcp.server.fastmcp import FastMCP

# Create an MCP server instance
mcp = FastMCP("MathServer")

# Expose a function as a tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers and return the result."""
    return a + b

# Start the server
if __name__ == "__main__":
    mcp.run(transport="stdio")
Step 3: Run the MCP Server
You can sanity-check the file by running:
python math_server.py
Because this server uses the stdio transport, it will just sit quietly waiting for a client on standard input; that’s expected. In practice you won’t start it by hand at all: the LangChain client in the next section launches it automatically as a subprocess.
Step 4: How It Works
FastMCP makes it easy to spin up an MCP server with minimal code.
The @mcp.tool() decorator marks a function as a tool agents can call.
The server runs using the stdio transport (standard input/output), which is the simplest way to connect to agents.
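stdio isn’t your only option. If you later want the server reachable over the network, a variant might look like the sketch below; the available transport names and default ports depend on the version of the mcp package you have installed, so treat this as a pattern to adapt rather than exact code.
# math_server_http.py - same tool, served over a network transport
# instead of stdio (transport names vary by mcp version).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("MathServer")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers and return the result."""
    return a + b

if __name__ == "__main__":
    mcp.run(transport="sse")  # listens on an HTTP port instead of stdin/stdout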
Step 5: Next Step — Connecting to LangChain
Now that we have a working MCP tool, we’ll connect it to a LangChain agent. This will let our agent automatically call the add function when it needs to perform arithmetic.
Connecting LangChain to MCP Tools
We now have a math MCP server running, but it’s not very useful until an agent can actually call it. This is where LangChain comes in. With LangChain, we can create an agent that reasons about a user’s request and automatically calls our MCP tool when needed.
Step 1: Create a New Python File
Make a new file called agent_with_mcp.py.
Step 2: Connect the MCP Client
LangChain provides a MultiServerMCPClient that can connect to one or more MCP servers. Let’s use it to hook into our math server:
# agent_with_mcp.py
import asyncio

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_mcp_adapters.client import MultiServerMCPClient

async def main():
    # Connect to the MCP math server (launched automatically over stdio)
    client = MultiServerMCPClient({
        "math": {
            "transport": "stdio",
            "command": "python",
            "args": ["math_server.py"],  # path to your MCP server
        }
    })

    # Get the tools from the MCP server
    tools = await client.get_tools()

    # Create an LLM (using OpenAI here, but you can use Anthropic, Gemini, etc.)
    llm = ChatOpenAI(model="gpt-4o-mini")

    # create_openai_functions_agent requires a prompt with an agent_scratchpad slot
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])
    agent = create_openai_functions_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)

    # Test the agent
    result = await executor.ainvoke({"input": "What is 5 plus 7?"})
    print("Agent Response:", result["output"])

if __name__ == "__main__":
    asyncio.run(main())
Step 3: Run the Agent
You don’t need a separate terminal for the MCP server: because we configured the stdio transport, the client launches math_server.py as a subprocess automatically. Just run:
python agent_with_mcp.py
You should see the agent respond with something like:
Agent Response: 12
Step 4: How It Works
The MultiServerMCPClient connects to one or more MCP servers (see the sketch after this list).
get_tools() fetches all available functions (in this case, just add).
The LangChain agent uses its LLM reasoning to decide when to call the add tool.
When you ask “What is 5 plus 7?”, the agent recognizes that the question maps to the add tool, calls it through MCP, and returns the result.
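Because the client accepts a dictionary of servers, registering a second one is just another entry. Here’s a hedged sketch, where weather_server.py is a hypothetical second server built the same way as math_server.py:
# Registering multiple MCP servers with one client; the agent then sees
# the union of all their tools.
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "math": {
        "transport": "stdio",
        "command": "python",
        "args": ["math_server.py"],
    },
    "weather": {
        "transport": "stdio",
        "command": "python",
        "args": ["weather_server.py"],  # hypothetical second server
    },
})
# tools = await client.get_tools()  # returns tools from both servers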
Now that our agent can call external tools, the next step is to let agents talk to each other. This is where the Agent-to-Agent (A2A) Protocol comes into play.
Building an A2A-Enabled Agent
So far, we’ve built a LangChain agent and connected it to an MCP tool. That’s powerful, but it’s still a single agent. What if we want multiple agents to discover each other and share tasks? That’s where the Agent-to-Agent (A2A) Protocol comes in.
With A2A, each agent exposes a simple JSON “card” describing its capabilities and runs a lightweight server that other agents can talk to. Agents can then send structured tasks to one another, just like humans passing around assignments.
Step 1: Create an Agent Card
Each agent needs a .well-known/agent.json file that describes its capabilities. Let’s make one for our math agent:
{
  "agentName": "CalcAgent",
  "version": "1.0",
  "description": "Performs arithmetic calculations",
  "protocol": "A2A",
  "capabilities": ["math.add"]
}
Save this in a folder (for example, calc_agent/.well-known/agent.json).
Step 2: Build an A2A Server
Next, we’ll use the a2a-sdk to serve this agent. Create a new file called calc_agent.py:
# calc_agent.py
import asyncio

from a2a_sdk.server import A2AServer
from a2a_sdk.types import Task

# Define how the agent handles tasks
async def handle_task(task: Task):
    user_message = task["messages"][0]["parts"][0]["text"]
    # Lowercase for a case-insensitive match ("Add 5 and 7" should qualify)
    if "add" in user_message.lower():
        # Very simple parser for "Add X and Y"
        numbers = [int(s) for s in user_message.split() if s.isdigit()]
        result = sum(numbers)
        return f"The sum is {result}"
    return "Sorry, I only know how to add numbers."

async def main():
    server = A2AServer(agent_card_path="./.well-known/agent.json", task_handler=handle_task)
    await server.start()

if __name__ == "__main__":
    asyncio.run(main())
Step 3: Run the Agent
Start your math agent by running:
python calc_agent.py
It will now listen for A2A requests on its configured endpoint (default localhost port).
Step 4: Send a Task from Another Agent
To test collaboration, create a second agent (e.g., client_agent.py) that sends tasks to CalcAgent:
# client_agent.py
import asyncio

from a2a_sdk.client import A2AClient

async def main():
    client = A2AClient(remote_agent_url="http://localhost:8000")  # adjust port if needed
    task = {
        "task": {
            "taskId": "task1",
            "state": "submitted",
            "messages": [
                {"role": "user", "parts": [{"text": "Add 5 and 7"}]}
            ]
        }
    }
    response = await client.send_task(task)
    print("Response from CalcAgent:", response)

if __name__ == "__main__":
    asyncio.run(main())
Run this file in a second terminal. If everything is working, you’ll see:
Response from CalcAgent: The sum is 12
Step 5: Why This Matters
Discovery: The agent card (agent.json) lets other agents know what tasks this agent supports.
Communication: A2A defines a standard task format (taskId, state, messages).
Collaboration: One agent can now delegate work to another, making multi-agent workflows possible.
Now that we have agents that can talk to each other, the final step is to combine everything: a LangChain agent that uses MCP tools internally and can also collaborate with other agents via A2A.
Bringing It All Together: Multi-Agent Collaboration
We now have all the pieces:
LangChain to power reasoning.
MCP tools to give agents external capabilities.
A2A so agents can share tasks with each other.
Let’s combine them into a mini multi-agent system.
Scenario: Math + Spelling Collaboration
We’ll build two agents:
CalcAgent — Uses LangChain + MCP to perform calculations.
SpellingAgent — Uses LangChain to turn numbers into words.
The two agents will communicate via A2A:
The user asks: “Add 12 and 7, then spell out the result.”
CalcAgent computes 19 using its MCP math tool.
CalcAgent then sends an A2A task to SpellingAgent, asking to convert 19 into words.
SpellingAgent responds with “nineteen.”
The final answer is returned to the user.
Step 1: MCP-Powered CalcAgent
We already have math_server.py from earlier. Now, let’s build an A2A-enabled CalcAgent that uses this server.
# calc_agent.py
import asyncio

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_mcp_adapters.client import MultiServerMCPClient
from a2a_sdk.server import A2AServer
from a2a_sdk.types import Task

async def handle_task(task: Task):
    user_input = task["messages"][0]["parts"][0]["text"]

    # Connect to the MCP server (launched over stdio as a subprocess).
    # For simplicity we reconnect on every task; a real agent would reuse the client.
    client = MultiServerMCPClient({
        "math": {
            "transport": "stdio",
            "command": "python",
            "args": ["math_server.py"],
        }
    })
    tools = await client.get_tools()

    # Create the agent
    llm = ChatOpenAI(model="gpt-4o-mini")
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ])
    agent = create_openai_functions_agent(llm, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)

    # Ask the agent
    result = await executor.ainvoke({"input": user_input})
    return result["output"]

async def main():
    server = A2AServer(
        agent_card_path="./.well-known/agent.json",
        task_handler=handle_task
    )
    await server.start()

if __name__ == "__main__":
    asyncio.run(main())
Step 2: SpellingAgent
The SpellingAgent doesn’t need MCP; it just uses LangChain + an LLM to convert numbers to words.
# spelling_agent.py
import asyncio

from langchain_openai import ChatOpenAI
from a2a_sdk.server import A2AServer
from a2a_sdk.types import Task

llm = ChatOpenAI(model="gpt-4o-mini")

async def handle_task(task: Task):
    user_input = task["messages"][0]["parts"][0]["text"]
    response = await llm.ainvoke(f"Convert {user_input} into English words.")
    return response.content

async def main():
    server = A2AServer(
        agent_card_path="./.well-known/agent.json",
        task_handler=handle_task
    )
    await server.start()

if __name__ == "__main__":
    asyncio.run(main())
Step 3: Orchestrating the Workflow
Finally, let’s write a client agent that coordinates the process.
# orchestrator.py
import asyncio

from a2a_sdk.client import A2AClient

async def main():
    # First ask CalcAgent
    calc_client = A2AClient(remote_agent_url="http://localhost:8000")  # CalcAgent URL
    calc_task = {
        "task": {
            "taskId": "task1",
            "state": "submitted",
            "messages": [{"role": "user", "parts": [{"text": "Add 12 and 7"}]}]
        }
    }
    calc_response = await calc_client.send_task(calc_task)
    number_result = calc_response["result"]

    # Now ask SpellingAgent
    spelling_client = A2AClient(remote_agent_url="http://localhost:8005")  # SpellingAgent URL
    spell_task = {
        "task": {
            "taskId": "task2",
            "state": "submitted",
            "messages": [{"role": "user", "parts": [{"text": number_result}]}]
        }
    }
    spell_response = await spelling_client.send_task(spell_task)
    print("Final Answer:", number_result, "-", spell_response["result"])

if __name__ == "__main__":
    asyncio.run(main())
Step 4: Run the System
Run calc_agent.py (A2A + MCP; it launches math_server.py over stdio automatically).
Run spelling_agent.py (A2A + LangChain).
Run orchestrator.py.
You should see something like:
Final Answer: 19 - nineteen
Why This Is Powerful
LangChain handled reasoning.
MCP exposed tools the agent could call.
A2A allowed agents to collaborate across tasks.
This is the foundation of building complex, modular AI systems where specialized agents handle their part of the job and then pass results along.
Troubleshooting & Tips
As you experiment with LangChain, MCP, and A2A, it’s normal to run into small hiccups. Here are some common issues and how to fix them.
Environment & Dependencies
Python version — Make sure you’re running Python 3.10 or later. Older versions may cause errors when installing a2a-sdk or langchain.
Virtual environments — Always use a virtual environment (venv, conda, or uv) to avoid dependency conflicts.
MCP Issues
Server not found — If your agent can’t connect to the MCP server, double-check the path to your script in MultiServerMCPClient.
Transport problems — Start with stdio transport (simplest). If you try HTTP, ensure you specify the correct port and the server is listening (see the sketch after this list).
Tool not available — If the agent doesn’t see your MCP function, make sure it’s decorated with @mcp.tool() and that the server is running.
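If you do switch to HTTP, the client side changes from a command to a URL. A hedged sketch (the exact transport name and URL path depend on your langchain-mcp-adapters and mcp versions):
# Pointing MultiServerMCPClient at a network MCP server instead of
# spawning a local stdio subprocess.
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "math": {
        "transport": "streamable_http",      # "sse" on some older versions
        "url": "http://localhost:8080/mcp",  # assumed host, port, and path
    },
})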
LangChain Issues
Missing packages — If you get ModuleNotFoundError, check that you installed langchain, langchain-openai, and langchain-mcp-adapters.
API keys — LLMs like OpenAI require a valid API key. Store it in environment variables or a .env file. A missing key usually results in an authentication error.
Async errors — Both LangChain MCP clients and A2A use asyncio. Wrap your code in async def main(): and run with asyncio.run(main()).
A2A Issues
Agent card not found — Make sure the .well-known/agent.json file exists and is referenced correctly in your server.
Port conflicts — Each agent should run on a unique port (e.g., CalcAgent on 8000, SpellingAgent on 8005). If you see an “Address already in use” error, change the port.
Response format — A2A expects structured responses (taskId, state, messages). If your agent just returns plain text, wrap it properly before sending back.
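For the last point, a small helper can do the wrapping. This sketch reuses the task format from earlier in the tutorial (taskId, state, messages); adjust the field names if your a2a-sdk version differs:
# Wrap a plain-text result in the structured envelope A2A expects.
def wrap_response(task_id: str, text: str) -> dict:
    return {
        "task": {
            "taskId": task_id,
            "state": "completed",
            "messages": [
                {"role": "agent", "parts": [{"text": text}]}
            ],
        }
    }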
Debugging Tips
Print logs — Add print() statements to your handlers to see what messages/tasks are being passed around.
Start small — Test MCP and LangChain together first, then add A2A. Trying to debug everything at once can get overwhelming.
Use sample agents — The A2A and MCP GitHub repos include sample projects. Compare your code against them if you get stuck.
Congratulations, you’ve just built a working multi-agent AI system using:
LangChain for reasoning,
MCP for external tools, and
A2A for collaboration.
This beginner project may be simple, but it introduces the core patterns of modern AI engineering: modularity, interoperability, and multi-agent workflows. From here, you can expand your agents with new MCP tools, add more specialized collaborators, or even deploy them across different environments.
The future of AI isn’t just one giant model; it’s teams of agents working together, and now you have the skills to start building them.
Next Steps & Further Learning
You’ve built your first multi-agent system. Congratulations! But this is just the beginning. There are many ways you can extend what you’ve learned and explore the ecosystem further.
Add More MCP Tools
Right now, your agent only knows how to add numbers. Why not expand? You could:
Build an MCP server for weather lookup (via a public API).
Add a file reader tool so your agent can process text or CSV data.
Create a knowledge base query tool that pulls from your own documents.
Every new MCP server you add gives your agents new skills without changing their core logic.
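For example, a weather server could follow exactly the same shape as math_server.py. In this sketch the canned dictionary stands in for a real API call; swap it for an HTTP request to the weather provider of your choice:
# weather_server.py - a second MCP server, same pattern as math_server.py.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("WeatherServer")

# Stand-in data; replace with a real weather API lookup.
FAKE_FORECASTS = {
    "london": "12°C, light rain",
    "tokyo": "18°C, clear skies",
}

@mcp.tool()
def get_weather(city: str) -> str:
    """Return a short weather summary for the given city."""
    return FAKE_FORECASTS.get(city.lower(), "No forecast available.")

if __name__ == "__main__":
    mcp.run(transport="stdio")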
Experiment with Multi-Agent Patterns
You’ve seen two agents collaborate, but multi-agent systems can grow much larger. Try experimenting with:
Specialized agents — One agent for math, one for spelling, one for summarization.
Coordinator agents — An orchestrator that delegates tasks to the right specialists.
Peer-to-peer communication — Agents that negotiate or share partial results back and forth.
The A2A protocol makes it possible to stitch together entire ecosystems of agents.
Explore Deployment Options
Running locally is great for learning, but you can go further by deploying agents so they run 24/7:
Docker — Containerize your MCP servers and A2A agents.
Cloud services — Deploy on AWS, GCP, or Azure.
Serverless — Run MCP tools as serverless functions that agents can call on-demand.
Learn More from the Communities
Here are a few good resources to deepen your knowledge:
LangChain Documentation — Guides, API references, and tutorials.
Model Context Protocol GitHub — MCP specs, servers, and client libraries.
Google Agent-to-Agent SDK — Examples and tools for building A2A-enabled agents.
Dream Bigger
With these building blocks, you could create:
A personal research assistant that queries the web, summarizes, and shares results between agents.
A data pipeline of agents: one extracts data, another cleans it, another loads it into a database.
An AI workplace team where different agents handle tasks like scheduling, writing reports, or generating insights.
Multi-agent AI isn’t science fiction; it’s already here, and you now have the foundation to start building. The future belongs to teams of agents, and you’ve just taken the first step toward becoming their architect.