Start with LLM mocking. Add MCP, A2A, vector, and more when you need them. The aimock suite grows with your stack.
npm install @copilotkit/aimock
```json
{
  "match": { "userMessage": "Hello" },
  "response": { "content": "Hi there! How can I help?" },
  "opts": { "chunkSize": 10, "latency": 1000 }
}
```
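A fixture like this resolves to a canned response with a few lines of lookup logic. The helper below is a hypothetical sketch (the types and the `matchFixture` name are illustrative, not aimock's actual internals) showing how an exact `userMessage` match might work, with unmatched requests left for the record-and-replay proxy to handle:

```typescript
// Hypothetical shapes mirroring the fixture JSON above; not aimock's real types.
interface Fixture {
  match: { userMessage: string };
  response: { content: string };
  opts?: { chunkSize?: number; latency?: number };
}

// Return the response of the first fixture whose match.userMessage equals
// the incoming message, or undefined when nothing matches (at which point
// a recording proxy could forward the request to the real API).
function matchFixture(
  fixtures: Fixture[],
  userMessage: string,
): Fixture["response"] | undefined {
  return fixtures.find((f) => f.match.userMessage === userMessage)?.response;
}

const fixtures: Fixture[] = [
  {
    match: { userMessage: "Hello" },
    response: { content: "Hi there! How can I help?" },
    opts: { chunkSize: 10, latency: 1000 },
  },
];
```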
Proxy unmatched requests to real APIs and capture every response.
Fixtures written to disk automatically as clean, editable JSON.
Deterministic responses in CI, forever. No API keys, no flakiness.
```
$ npx aimock --record --provider-openai https://api.openai.com
⚡ Listening on http://localhost:4010
⚠ NO FIXTURE MATCH — proxying to https://api.openai.com/v1/chat/completions
✓ Recorded → fixtures/recorded/openai-2026-03-31T22:15:00.json
⚡ Fixture match — replaying from disk
```
One JSON config. One port. Every service your AI app depends on.
```json
{
  "llm": {
    "fixtures": "./fixtures/llm",
    "providers": ["openai", "claude", "gemini"]
  },
  "mcp": {
    "tools": "./fixtures/mcp/tools.json",
    "resources": "./fixtures/mcp/resources.json"
  },
  "a2a": { "agents": "./fixtures/a2a/agents.json" },
  "vector": { "provider": "pinecone", "fixtures": "./fixtures/vector" }
}
```
```
$ npx aimock --config aimock.json
⚡ aimock v1.0.0
✓ LLM mounted at /v1/chat/completions
✓ LLM mounted at /v1/messages
✓ LLM mounted at /v1/embeddings
✓ MCP mounted at /mcp/tools/*
✓ A2A mounted at /a2a/agents/*
✓ Vector mounted at /vectors/*
⚡ Listening on http://localhost:4010
6 services · 24 fixtures loaded
```
OpenAI, Claude, Gemini, Bedrock, Azure, Vertex AI, Ollama, Cohere — full streaming and embeddings support for every provider.
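Streamed replies are shaped by per-fixture options like `chunkSize`. As a rough sketch of what that could look like under the hood (a hypothetical `chunkContent` helper, not aimock's real streaming code), the fixture's content is split into fixed-size deltas that a server would then wrap in provider-specific SSE events:

```typescript
// Split fixture content into fixed-size pieces, one per streamed delta.
// Assumes chunkSize > 0; the final chunk may be shorter than chunkSize.
function chunkContent(content: string, chunkSize: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < content.length; i += chunkSize) {
    chunks.push(content.slice(i, i + chunkSize));
  }
  return chunks;
}

// Each delta would be emitted as an SSE event (e.g. OpenAI-style
// `data: {"choices":[{"delta":{"content": ...}}]}`), optionally with a
// latency delay between events.
const deltas = chunkContent("Hi there! How can I help?", 10);
```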
Mock tools, resources, and prompts with full session management. Test your MCP integrations without running real tool servers.
Agent cards, message routing, and SSE streaming. Mock multi-agent interactions with deterministic responses.
Pinecone, Qdrant, and ChromaDB compatible. Mock similarity search, upserts, and index operations with fixtures.
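Similarity search over fixtures can be surprisingly small. The sketch below is an illustrative implementation (the `StoredVector` shape and `topK` helper are assumptions, not aimock's actual vector engine): cosine similarity against every stored vector, then the top-k ids:

```typescript
// Hypothetical fixture record for a stored vector.
type StoredVector = { id: string; values: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Brute-force top-k: fine for fixture-sized indexes, and fully deterministic.
function topK(index: StoredVector[], query: number[], k: number): string[] {
  return [...index]
    .sort((x, y) => cosine(query, y.values) - cosine(query, x.values))
    .slice(0, k)
    .map((v) => v.id);
}
```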
Drop requests, return malformed payloads, or disconnect mid-stream, each at a probability you set. Verify your app gracefully handles every failure mode.
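One way to model probabilistic fault injection is a single draw against the configured probabilities per request. The names and config shape below are illustrative assumptions, not aimock's real chaos API; the random source is injectable so the behavior stays testable:

```typescript
// Hypothetical chaos configuration: independent per-request probabilities.
type Fault = "drop" | "malformed" | "disconnect" | null;

interface ChaosConfig {
  drop: number;        // probability of never responding
  malformed: number;   // probability of returning a garbage payload
  disconnect: number;  // probability of cutting the connection mid-response
}

// One uniform draw partitioned into fault bands; `rand` is injectable
// so tests can force a specific outcome deterministically.
function pickFault(cfg: ChaosConfig, rand: () => number = Math.random): Fault {
  const r = rand();
  if (r < cfg.drop) return "drop";
  if (r < cfg.drop + cfg.malformed) return "malformed";
  if (r < cfg.drop + cfg.malformed + cfg.disconnect) return "disconnect";
  return null; // serve the fixture normally
}
```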
Daily CI validation against real APIs. Know immediately when provider behavior changes break your fixtures.
aimock's drift detection runs daily against live provider APIs. When response formats change, you know immediately — not when your tests break in production.
Daily CI hits actual OpenAI, Anthropic, Gemini endpoints to capture current response formats.
Compares real responses against aimock's fixture format. Schema changes are caught instantly.
Drift detected → PR opened → fixtures, skills, and docs updated automatically. Zero manual effort.
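The comparison step above can be approximated by diffing the key sets of a live provider response against a recorded fixture. This `keyDrift` helper is a simplified sketch of that idea (aimock's real comparison is presumably deeper, handling nesting and value types):

```typescript
// Compare top-level keys of a recorded fixture against a live response.
// Hypothetical helper illustrating schema-drift detection.
function keyDrift(
  fixture: Record<string, unknown>,
  live: Record<string, unknown>,
): { added: string[]; removed: string[] } {
  const f = new Set(Object.keys(fixture));
  const l = new Set(Object.keys(live));
  return {
    added: [...l].filter((k) => !f.has(k)),   // fields the provider started sending
    removed: [...f].filter((k) => !l.has(k)), // fields the provider dropped
  };
}
```

A non-empty `added` or `removed` list would be the signal that triggers the automated fixture-update PR.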
| Capability | aimock | MSW | VidaiMock | mock-llm | piyook/llm-mock |
|---|---|---|---|---|---|
| Cross-process interception | Real server ✓ | In-process only | ✓ | ✓ (Docker) | ✓ |
| Chat Completions SSE | Built-in ✓ | manual | ✓ | ✓ | ✓ |
| Responses API SSE | Built-in ✓ | manual | ✓ | ✓ | ✓ |
| Claude Messages API | Built-in ✓ | manual | ✓ | ✗ | ✓ |
| Gemini streaming | Built-in ✓ | manual | ✓ | ✗ | ✓ |
| WebSocket APIs | Built-in ✓ | ✗ | ✗ | ✗ | ✗ |
| Multi-provider support | 11 providers ✓ | manual | 11 providers | OpenAI only | OpenAI only |
| Embeddings API | Built-in ✓ | ✗ | ✓ | ✗ | ✓ |
| Structured output / JSON mode | Built-in ✓ | manual | ✗ | ✗ | ✗ |
| Sequential / stateful responses | Built-in ✓ | manual | ✗ | ✓ | ✗ |
| Fixture files | JSON ✓ | Code-only | Tera templates | YAML config | JSON templates |
| Programmatic API | ✓ (TypeScript/JS) | ✓ (TypeScript/JS) | ✗ (binary only) | ✗ | ✗ |
| Request journal | ✓ | manual | ✗ | ✗ | ✓ |
| Error injection | ✓ | ✓ | partial | ✗ | ✗ |
| Docker + Helm | Both ✓ | ✗ | Docker only | ✓ (Both) | Docker only |
| Drift detection | ✓ | ✗ | ✗ | ✗ | ✗ |
| Chaos testing | Built-in ✓ | ✗ | ✓ | ✗ | ✗ |
| Record & replay | Built-in ✓ | ✗ | ✗ | ✗ | ✗ |
| Prometheus metrics | Built-in ✓ | ✗ | ✓ | ✗ | ✗ |
| Streaming physics | Built-in ✓ | ✗ | ✓ | ✗ | ✗ |
| MCP tool mocking | Built-in ✓ | ✗ | ✗ | ✗ | ✗ |
| A2A agent mocking | Built-in ✓ | ✗ | ✗ | ✗ | ✗ |
| Vector DB mocking | Built-in ✓ | ✗ | ✗ | ✗ | ✗ |
| Search & rerank | Built-in ✓ | ✗ | ✗ | ✗ | ✗ |
| Dependencies | Zero ✓ | ~300KB | Zero (Rust) | Node+Express | Minimal |
AG-UI uses aimock in its end-to-end test suite, verifying AI agent behavior across LLM providers with fixture-driven responses.