openclaw-a2a

What happens when AI agents start talking to each other? Let's find out.


What is this?

A bridge that lets any A2A-compatible agent chat with your self-hosted OpenClaw AI assistant. Think of it as giving your OpenClaw a phone number that other agents can call.

Google ADK Agent ---+
CrewAI Agent -------+
Custom Agent -------+-- A2A Protocol --> openclaw-a2a --> OpenClaw Gateway
Claude.ai* ---------+     (streaming!)

* via A2A-MCP bridge

Quick Install

npm
npm install -g openclaw-a2a@beta
openclaw-a2a --openclaw-url http://127.0.0.1:18789
Docker
docker pull ghcr.io/freema/openclaw-a2a:beta
docker run -p 3100:3100 \
  -e OPENCLAW_URL=http://host.docker.internal:18789 \
  ghcr.io/freema/openclaw-a2a:beta
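Once the bridge is up, other agents discover it through its A2A agent card. A minimal sketch of building the discovery URL, assuming the bridge follows the A2A spec's well-known path convention (the exact path this bridge serves is an assumption):

```typescript
// Build the agent card discovery URL for a running bridge instance.
// "/.well-known/agent-card.json" follows the A2A spec convention;
// whether openclaw-a2a serves this exact path is an assumption.
function agentCardUrl(base: string): string {
  return new URL("/.well-known/agent-card.json", base).toString();
}
```

A client would fetch this URL to learn the bridge's capabilities (streaming support, skills) before sending messages.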

Documentation

Features

A2A v1.0 Compliant

PascalCase methods, SCREAMING_SNAKE enums, field presence discrimination. Implemented from the proto spec.
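Field presence discrimination means a response carries exactly one of several possible payloads (a proto `oneof`), with no explicit type tag. A sketch of how a client might unwrap such a result; the interface shapes here are simplified assumptions based on the A2A proto spec, not openclaw-a2a's actual types:

```typescript
// Simplified A2A payload shapes (assumptions for illustration).
interface A2AMessage { messageId: string; role: string; parts: unknown[] }
interface A2ATask { id: string; status: { state: string } }

// Exactly one of `message` or `task` is present on a send result.
type SendResult = { message: A2AMessage } | { task: A2ATask };

// Discriminate by checking which field is present, not by a type tag.
function unwrap(result: SendResult): A2AMessage | A2ATask {
  if ("message" in result) return result.message;
  return result.task;
}
```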

Real-time Streaming

SSE streaming from OpenClaw Gateway through A2A protocol with 15s heartbeat keep-alive.
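In SSE, comment lines (starting with `:`) are the standard keep-alive mechanism and carry no data. A sketch of how a consumer might extract data events while ignoring heartbeats; the `: keep-alive` comment text is an assumption about how this bridge encodes its 15s heartbeat:

```typescript
// Parse a text/event-stream chunk into data payloads, dropping
// ':' comment lines (the SSE keep-alive mechanism). Events are
// separated by blank lines per the SSE format.
function parseSseEvents(chunk: string): string[] {
  const events: string[] = [];
  for (const block of chunk.split("\n\n")) {
    const dataLines = block
      .split("\n")
      .filter((line) => line.startsWith("data:")) // heartbeats filtered here
      .map((line) => line.slice(5).trim());
    if (dataLines.length > 0) events.push(dataLines.join("\n"));
  }
  return events;
}
```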

Multi-instance

Route requests to different OpenClaw instances based on message metadata.
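A minimal sketch of metadata-based routing. The metadata key name and the idea of a `default` fallback are assumptions for illustration; the real key openclaw-a2a reads may differ:

```typescript
// Map instance names to OpenClaw Gateway URLs (illustrative values).
const instances: Record<string, string> = {
  default: "http://127.0.0.1:18789",
  work: "http://10.0.0.5:18789",
};

// Pick a target from message metadata; the key "openclaw/instance"
// is a hypothetical name, and unknown or missing values fall back
// to the default instance.
function resolveInstance(metadata: Record<string, unknown> | undefined): string {
  const name = (metadata?.["openclaw/instance"] as string) ?? "default";
  return instances[name] ?? instances["default"];
}
```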

Multi-turn

Conversation continuity via contextId and taskId on messages.
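A sketch of how a client keeps a conversation going: echo the `contextId` (and `taskId`, if the previous turn produced a task) back on the follow-up message. The field names follow the A2A spec; the surrounding payload shape is simplified for illustration:

```typescript
// State carried forward from the previous turn.
interface TurnState { contextId?: string; taskId?: string }

interface FollowUp {
  message: {
    role: string;
    parts: { text: string }[];
    contextId?: string; // ties the message to an existing conversation
    taskId?: string;    // continues a specific task, when one exists
  };
}

// Build a follow-up message, attaching only the continuity fields
// that the previous turn actually produced.
function buildFollowUp(text: string, prev: TurnState): FollowUp {
  const message: FollowUp["message"] = { role: "user", parts: [{ text }] };
  if (prev.contextId) message.contextId = prev.contextId;
  if (prev.taskId) message.taskId = prev.taskId;
  return { message };
}
```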

Who Can Talk To This?

Native A2A clients: Google ADK, CrewAI, Microsoft Agent Framework, BeeAI, LangGraph, or any HTTP client.

Claude users: via A2A-MCP bridge (Claude speaks MCP, not A2A... yet).
