# Record & Replay
VCR-style record-and-replay support. When a request doesn't match any fixture, aimock proxies it to the real upstream provider, records the response as a fixture on disk and in memory, then replays it on subsequent identical requests.
## How It Works
- Client sends a request to aimock
- aimock attempts fixture matching as usual
- On miss: the request is forwarded to the configured upstream provider
- The upstream response is relayed back to the client immediately
- The response is collapsed (if streaming) and saved as a fixture to disk and memory
- Subsequent identical requests match the newly recorded fixture
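The flow above can be sketched in pseudocode. Everything here (`FixtureStore`, `handleRequest`, `callUpstream`) is illustrative, not aimock's actual API; the real server proxies over HTTP and also persists fixtures to disk.

```ts
type Fixture = { match: { userMessage: string }; response: { content: string } };

class FixtureStore {
  private fixtures: Fixture[] = [];
  find(userMessage: string): Fixture | undefined {
    return this.fixtures.find((f) => f.match.userMessage === userMessage);
  }
  add(fixture: Fixture): void {
    this.fixtures.push(fixture); // in-memory registration; the disk write is omitted here
  }
}

// Stand-in for the real upstream HTTP call.
function callUpstream(userMessage: string): string {
  return `upstream answer to: ${userMessage}`;
}

function handleRequest(store: FixtureStore, userMessage: string): string {
  const hit = store.find(userMessage);
  if (hit) return hit.response.content;          // hit: replay recorded fixture
  const content = callUpstream(userMessage);     // miss: proxy to upstream
  store.add({ match: { userMessage }, response: { content } }); // record for next time
  return content;
}
```

The first call for a given message goes upstream and records; every identical call after that is served from the store.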
## Quick Start

```sh
$ npx aimock --fixtures ./fixtures \
    --record \
    --provider-openai https://api.openai.com \
    --provider-anthropic https://api.anthropic.com
```

Or with Docker:

```sh
$ docker run -d -p 4010:4010 \
    -v ./fixtures:/fixtures \
    ghcr.io/copilotkit/aimock \
    npx aimock --fixtures /fixtures \
      --record \
      --provider-openai https://api.openai.com \
      --provider-anthropic https://api.anthropic.com
```
## CLI Flags

| Flag | Description |
|---|---|
| `--record` | Enable record mode (proxy-on-miss) |
| `--strict` | Strict mode: return 503 (not 404) on unmatched requests |
| `--provider-openai <url>` | Upstream URL for OpenAI |
| `--provider-anthropic <url>` | Upstream URL for Anthropic |
| `--provider-gemini <url>` | Upstream URL for Gemini |
| `--provider-vertexai <url>` | Upstream URL for Vertex AI |
| `--provider-bedrock <url>` | Upstream URL for Bedrock |
| `--provider-azure <url>` | Upstream URL for Azure OpenAI |
| `--provider-ollama <url>` | Upstream URL for Ollama |
| `--provider-cohere <url>` | Upstream URL for Cohere |
## Programmatic API

```ts
import { LLMock } from "@copilotkit/aimock";

const mock = new LLMock();
await mock.start();

// Enable recording with upstream providers
mock.enableRecording({
  providers: {
    openai: "https://api.openai.com",
    anthropic: "https://api.anthropic.com",
  },
  fixturePath: "./fixtures/recorded",
});

// Make requests — unmatched ones are proxied and recorded
// ...

// Disable recording — recorded fixtures persist on disk
mock.disableRecording();
```
## Stream Collapsing
When the upstream provider returns a streaming response, aimock collapses it into a non-streaming fixture. Six streaming formats are supported:
| Format | Provider | Content-Type |
|---|---|---|
| OpenAI SSE | OpenAI, Azure | text/event-stream |
| Anthropic SSE | Anthropic | text/event-stream |
| Gemini SSE | Gemini, Vertex AI | text/event-stream |
| Cohere SSE | Cohere | text/event-stream |
| Ollama NDJSON | Ollama | application/x-ndjson |
| Bedrock EventStream | AWS Bedrock | application/vnd.amazon.eventstream |
The collapse extracts text content and tool calls from the streaming chunks and produces a simple `{ content }` or `{ toolCalls }` fixture response.
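As an illustration, collapsing the OpenAI SSE format could look like the sketch below. `collapseOpenAISSE` is an illustrative name, not aimock's internal code; the chunk shape follows OpenAI's public chat-completions streaming format.

```ts
// Collapse a raw OpenAI-style SSE body into a non-streaming { content } response.
function collapseOpenAISSE(raw: string): { content: string } {
  let content = "";
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data: ")) continue;      // SSE data lines only
    const payload = line.slice("data: ".length);
    if (payload === "[DONE]") break;               // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta;
    if (delta?.content) content += delta.content;  // accumulate text deltas
  }
  return { content };
}
```

The other five formats differ only in framing (NDJSON lines, Amazon's binary EventStream) and in where the text delta lives in each chunk.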
## Auth Header Forwarding
When proxying to upstream providers, aimock forwards these headers from the original request:
- `authorization`
- `x-api-key`
- `content-type`
- `accept`
Auth headers are never saved in recorded fixtures. The fixture only contains the match criteria (derived from the last user message) and the response content.
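The allow-list behavior can be sketched as follows; `pickForwardedHeaders` is an illustrative name, not aimock's internals.

```ts
// Headers forwarded to the upstream provider; everything else is dropped.
const FORWARDED = ["authorization", "x-api-key", "content-type", "accept"];

function pickForwardedHeaders(incoming: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(incoming)) {
    if (FORWARDED.includes(name.toLowerCase())) out[name] = value;
  }
  return out;
}
```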
## Strict Mode

When `--strict` is enabled, unmatched requests that cannot be proxied (no upstream configured for that provider) return `503 Service Unavailable` instead of the default `404`. This is useful in CI environments where you want to catch unexpected API calls.
## Fixture Auto-Generation

Recorded fixtures are saved to disk with timestamped filenames:

```jsonc
// fixtures/recorded/openai-2025-01-15T10-30-00-000Z-0.json
{
  "fixtures": [
    {
      "match": { "userMessage": "What is the weather?" },
      "response": { "content": "I don't have real-time weather data..." }
    }
  ]
}
```
Match criteria are derived from the original request: the last user message becomes `userMessage`, or for embedding requests, the input becomes `inputText`. If no match criteria can be derived (e.g., empty messages), the fixture is saved to disk with a warning but not registered in memory.
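That derivation can be sketched as below, assuming OpenAI-style request bodies; `deriveMatch` is illustrative, not aimock's internal function.

```ts
type ChatMessage = { role: string; content: string };

// Derive fixture match criteria from a request body, or null if none can be derived.
function deriveMatch(body: { messages?: ChatMessage[]; input?: string }):
  { userMessage?: string; inputText?: string } | null {
  if (body.messages?.length) {
    // Last user message wins for chat requests.
    const lastUser = [...body.messages].reverse().find((m) => m.role === "user");
    if (lastUser) return { userMessage: lastUser.content };
  }
  // Embedding requests match on their input text.
  if (typeof body.input === "string") return { inputText: body.input };
  return null; // saved to disk with a warning, not registered in memory
}
```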
## Fixture Lifecycle

- **On disk:** Fixtures persist in the configured `fixturePath` directory (default: `./fixtures/recorded`)
- **In memory:** Recorded fixtures are immediately available for matching subsequent requests in the same session
- **After restart:** Load the recorded fixture directory to replay previous recordings
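For reference, the timestamped filename scheme shown earlier (e.g. `openai-2025-01-15T10-30-00-000Z-0.json`) amounts to a filesystem-safe ISO timestamp; this helper is a sketch, and `fixtureFilename` is not part of aimock's API.

```ts
// Build a timestamped fixture filename like openai-2025-01-15T10-30-00-000Z-0.json.
function fixtureFilename(provider: string, index: number, when: Date = new Date()): string {
  const stamp = when.toISOString().replace(/[:.]/g, "-"); // colons/dots are unsafe in filenames
  return `${provider}-${stamp}-${index}.json`;
}
```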
## Local Development Workflow
Record once against real APIs, then replay from fixtures for fast, offline development.
```sh
# First run: record real API responses
$ npx aimock --record --provider-openai https://api.openai.com -f ./fixtures

# Subsequent runs: replay from recorded fixtures
$ npx aimock -f ./fixtures
```

The same workflow with Docker:

```sh
# First run: record real API responses
$ docker run -d -p 4010:4010 \
    -v ./fixtures:/fixtures \
    ghcr.io/copilotkit/aimock \
    npx aimock --record --provider-openai https://api.openai.com -f /fixtures

# Subsequent runs: replay from recorded fixtures
$ docker run -d -p 4010:4010 \
    -v ./fixtures:/fixtures \
    ghcr.io/copilotkit/aimock \
    npx aimock -f /fixtures
```
## CI Pipeline Workflow

Use the Docker image in CI with `--strict` mode to ensure every request matches a recorded fixture. No API keys needed, no flaky network calls.
```yaml
- name: Start aimock
  run: |
    docker run -d --name aimock \
      -v ./fixtures:/fixtures \
      -p 4010:4010 \
      ghcr.io/copilotkit/aimock \
      npx aimock --strict -f /fixtures

- name: Run tests
  env:
    OPENAI_BASE_URL: http://localhost:4010/v1
  run: pnpm test

- name: Stop aimock
  run: docker stop aimock
```
## Building Fixture Sets
A practical workflow for building and maintaining fixture sets:
- Run with `--record` against real APIs during development
- Review recorded fixtures in `fixtures/recorded/`
- Move and rename to organized fixture directories
- Switch to `--strict` mode in CI
- Re-record when upstream APIs change (drift detection catches this)
## Cross-Language Testing
The Docker image serves any language that speaks HTTP. Point your client at the mock server's URL instead of the real API.
```sh
# Docker image serves all languages
docker run -d -p 4010:4010 ghcr.io/copilotkit/aimock npx aimock -f /fixtures
```

```python
# Python
import openai
client = openai.OpenAI(base_url="http://localhost:4010/v1", api_key="mock")
```

```go
// Go
client := openai.NewClient(option.WithBaseURL("http://localhost:4010/v1"))
```

```rust
// Rust
let client = Client::new().with_base_url("http://localhost:4010/v1");
```