# Switching from openai-responses to aimock

openai-responses intercepts httpx calls to mock OpenAI responses in-process. aimock instead runs a real HTTP server backed by fixture files: no monkey-patching, so it works with any framework, and it supports streaming.

## Before / After

Side-by-side: mocking a chat completion with openai-responses vs. aimock.

**openai-responses (~14 lines)**

```python
import openai_responses
from openai import OpenAI

@openai_responses.mock()
def test_chat(openai_mock):
    openai_mock.chat.completions.create.response = {
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop"
        }]
    }
    client = OpenAI()
    result = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": "hi"}]
    )
    assert result.choices[0].message.content == "Hello!"
```

**aimock (6 lines)**

```python
from openai import OpenAI

def test_chat(aimock):
    aimock.on_message("hi", content="Hello!")
    client = OpenAI(base_url=aimock.url + "/v1", api_key="test")
    result = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": "hi"}]
    )
    assert result.choices[0].message.content == "Hello!"
```

Or with fixture files:

**fixtures/chat.json**

```json
{
  "fixtures": [
    {
      "match": { "userMessage": "hi" },
      "response": { "content": "Hello!" }
    }
  ]
}
```

**test_chat.py**

```python
from openai import OpenAI

def test_chat(aimock):
    aimock.load_fixtures("./fixtures/chat.json")
    client = OpenAI(base_url=aimock.url + "/v1", api_key="test")
    result = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": "hi"}]
    )
    assert result.choices[0].message.content == "Hello!"
```
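A fixture file can also hold several entries. A sketch with two matchers, assuming each incoming message is matched against `userMessage` as above (the second entry is illustrative):

```json
{
  "fixtures": [
    {
      "match": { "userMessage": "hi" },
      "response": { "content": "Hello!" }
    },
    {
      "match": { "userMessage": "bye" },
      "response": { "content": "Goodbye!" }
    }
  ]
}
```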

## Step-by-step migration

1. Install `aimock-pytest` (replaces `pip install openai-responses`):

   ```sh
   pip install aimock-pytest
   ```

2. Convert inline mock definitions to fixture files or `on_message()` calls. Instead of constructing the full response envelope manually, specify only `content` or `toolCalls`; aimock generates the envelope automatically.
3. Replace the decorator: `@openai_responses.mock()` becomes the `aimock` pytest fixture parameter (auto-provided by `aimock-pytest`).
4. Point the client at aimock: replace `OpenAI()` with `OpenAI(base_url=aimock.url + "/v1", api_key="test")`.

## Feature mapping

| openai-responses | aimock |
| --- | --- |
| `@openai_responses.mock()` decorator | `aimock` pytest fixture (auto-provided) |
| `openai_mock.chat.completions.create.response = {...}` | `aimock.on_message("pattern", content="...")` |
| Partial envelope (`choices` required, other fields auto-filled) | Specify only `content` or `toolCalls`; envelope auto-generated |
| `openai_mock.chat.completions.create.route.calls` | `aimock.journal` |
| Python + OpenAI only | Python, TypeScript, any LLM provider |
| Manual SSE chunk construction (`streaming.EventStream`) | Automatic SSE streaming from fixture content |
| No fixture files | JSON fixture files with match patterns |
| httpx monkey-patching | Real HTTP server; works with any HTTP client |
| ✗ No record/replay | `npx aimock --record` captures real API calls |
| ✗ No CI action | `CopilotKit/aimock@v1` GitHub Action |
| ✗ No chaos testing | Error injection, mid-stream disconnects |

## What you gain

**🌐 Cross-process, cross-language**

Your Python tests, Node.js frontend, Go microservices, and Playwright E2E tests all hit the same mock server. No per-language patching.

**Built-in SSE for 10+ providers**

OpenAI, Claude, Gemini, Bedrock, Azure, Vertex AI, Ollama, Cohere. No manual chunk construction.

**🔌 WebSocket APIs**

OpenAI Realtime, Responses WS, Gemini Live. openai-responses cannot intercept WebSocket traffic.

**Record & Replay**

Proxy real APIs, save as fixtures, replay forever: `npx aimock --record --provider-openai https://api.openai.com`

**📁 Fixture files**

JSON files on disk. Version-controlled. Shared across tests. No hand-crafted envelopes.

**💥 Chaos testing**

Inject latency, drop chunks, corrupt payloads mid-stream. Test your error handling under realistic failure conditions.

**CI with GitHub Action**

One step replaces all the httpx monkey-patching infrastructure in CI.

**.github/workflows/test.yml**

```yaml
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Start aimock
        uses: CopilotKit/aimock@v1
        with:
          fixtures: ./fixtures

      - run: pip install -r requirements.txt
      - run: pytest
        env:
          OPENAI_BASE_URL: http://127.0.0.1:4010/v1
          OPENAI_API_KEY: mock-key
```

## CLI / Docker quick start

**CLI**

```sh
npx aimock -p 4010 -f ./fixtures
```

**Docker**

```sh
docker run -d -p 4010:4010 \
  -v $(pwd)/fixtures:/fixtures \
  ghcr.io/copilotkit/aimock:latest \
  -p 4010 -f /fixtures
```