# Switching from MSW to aimock

MSW is excellent for general API mocking. But when your AI app spans multiple processes—Playwright, Next.js, agent workers, Docker—MSW's in-process interception can't reach them. aimock runs a real HTTP server that any process can hit.

## The 5-minute switch

Side-by-side: streaming SSE with MSW vs. aimock.

**MSW — streaming SSE (~35 lines):**

```ts
import { http, HttpResponse } from 'msw'
import { setupServer } from 'msw/node'

const server = setupServer(
  http.post('https://api.openai.com/v1/chat/completions', () => {
    const encoder = new TextEncoder()
    const stream = new ReadableStream({
      start(controller) {
        controller.enqueue(encoder.encode(
          'data: {"choices":[{"delta":{"role":"assistant"}}]}\n\n'
        ))
        controller.enqueue(encoder.encode(
          'data: {"choices":[{"delta":{"content":"Hello"}}]}\n\n'
        ))
        controller.enqueue(encoder.encode(
          'data: {"choices":[{"delta":{"content":" there"}}]}\n\n'
        ))
        controller.enqueue(encoder.encode('data: [DONE]\n\n'))
        controller.close()
      }
    })
    return new HttpResponse(stream, {
      headers: { 'Content-Type': 'text/event-stream' }
    })
  })
)
server.listen()
```
**aimock — same result (4 lines):**

```ts
import { LLMock } from '@copilotkit/aimock'

const mock = new LLMock()
mock.onMessage("hello", { content: "Hello there" })
await mock.start()
// set OPENAI_BASE_URL = mock.url + "/v1"
```
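Whichever mock serves the stream, the client receives the same OpenAI-style SSE wire format. As a minimal sketch (an illustrative helper, not part of either library), parsing those frames back into text looks like:

```typescript
// Parse OpenAI-style SSE frames into the concatenated assistant text.
// Illustrative helper only; real clients use the provider SDK's streaming API.
function parseSse(body: string): string {
  let text = ''
  for (const frame of body.split('\n\n')) {
    const line = frame.trim()
    if (!line.startsWith('data:')) continue
    const payload = line.slice('data:'.length).trim()
    if (payload === '[DONE]') break
    // Each frame carries a partial "delta"; only content deltas add text.
    const delta = JSON.parse(payload).choices?.[0]?.delta
    if (delta?.content) text += delta.content
  }
  return text
}

// The exact frames from the MSW example above:
const body =
  'data: {"choices":[{"delta":{"role":"assistant"}}]}\n\n' +
  'data: {"choices":[{"delta":{"content":"Hello"}}]}\n\n' +
  'data: {"choices":[{"delta":{"content":" there"}}]}\n\n' +
  'data: [DONE]\n\n'

console.log(parseSse(body)) // "Hello there"
```

Because both mocks emit this format, your application code under test does not change when you switch.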

## Non-streaming comparison

For simple JSON responses, MSW is comparable:

**MSW:**

```ts
http.post('/v1/chat/completions', () => {
  return HttpResponse.json({
    choices: [{ message: { content: "Hi" } }]
  })
})
```

**aimock:**

```ts
mock.onMessage("hello", { content: "Hi" })
```

For non-streaming responses, the complexity is similar. aimock's advantage is consistency—the same fixture works for streaming, non-streaming, and WebSocket endpoints.
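To make the consistency claim concrete: expanding a non-streaming `{ content }` fixture into streaming frames is mechanical, which is what lets one fixture serve both modes. A sketch of the idea (illustrative only, not aimock's actual internals):

```typescript
// Expand a non-streaming fixture's content into OpenAI-style SSE frames.
// Illustrative only; aimock's real chunking strategy may differ.
function toSseFrames(content: string, chunkSize = 4): string[] {
  const frames = ['data: {"choices":[{"delta":{"role":"assistant"}}]}\n\n']
  for (let i = 0; i < content.length; i += chunkSize) {
    const delta = { content: content.slice(i, i + chunkSize) }
    frames.push(`data: ${JSON.stringify({ choices: [{ delta }] })}\n\n`)
  }
  frames.push('data: [DONE]\n\n')
  return frames
}

const frames = toSseFrames('Hello there')
console.log(frames.length) // role frame + 3 content chunks + [DONE] = 5
```

The reverse direction (concatenating the deltas) recovers the original string, so the same fixture answers both a streaming and a non-streaming request.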

## What you gain

- 🌐 **Cross-process interception:** Playwright → Next.js → agent workers → Docker. Every process on the machine hits the same mock.
- **Built-in SSE for 10+ providers:** OpenAI, Claude, Gemini, Bedrock, Azure, Vertex AI, Ollama, Cohere. No manual chunk construction.
- 🔌 **WebSocket APIs:** OpenAI Realtime, Responses WS, Gemini Live. MSW cannot intercept WebSocket.
- **Record & Replay:** Proxy real APIs, save as fixtures, replay forever: `npx aimock --record --provider-openai https://api.openai.com`
- 📁 **Fixture files:** JSON files on disk. Version-controlled. Shared across tests.
- 🧩 **MCP + A2A + Vector:** Mock your entire AI stack, not just LLM calls.
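The fixture files mentioned above are plain JSON on disk. A purely hypothetical example of the idea follows; aimock defines the actual schema, and the field names here are invented for illustration:

```jsonc
// fixtures/greeting.json: hypothetical shape, consult aimock's docs for the real schema
{
  "match": "hello",
  "response": { "content": "Hello there" }
}
```

Because fixtures are ordinary files, they can be committed alongside tests and shared between local runs, CI, and Docker.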

## What you keep (or lose)

| Capability | MSW | aimock | Notes |
| --- | --- | --- | --- |
| Browser service worker | ✅ | ❌ | Point browser app's API base URL at aimock instead |
| General REST/GraphQL mocking | ✅ | ❌ | Keep MSW alongside for non-AI routes |
| Cross-process | ❌ | ✅ | aimock's key advantage |
| Streaming SSE | Manual | Built-in | 10+ providers |
| WebSocket | ❌ | ✅ | 3 protocols |
| Record & replay | ❌ | ✅ | |
| Chaos testing | ❌ | ✅ | |
| Zero dependencies | ❌ | ✅ | MSW ~300KB |

## Using aimock alongside MSW

You don't have to choose. Use MSW for general REST/GraphQL mocking and aimock for AI-specific endpoints.

**test-setup.ts:**

```ts
import { http, HttpResponse } from 'msw'
import { setupServer } from 'msw/node'
import { LLMock } from '@copilotkit/aimock'

// MSW for REST APIs
const mswServer = setupServer(
  http.get('/api/users', () => HttpResponse.json([...]))
)

// aimock for AI APIs
const aiMock = new LLMock({ port: 0 })
aiMock.onMessage("hello", { content: "Hi!" })

beforeAll(async () => {
  mswServer.listen()
  await aiMock.start()
  process.env.OPENAI_BASE_URL = aiMock.url + "/v1"
})
```

## CLI / Docker quick start

**CLI:**

```sh
npx aimock -p 4010 -f ./fixtures
```

**Docker:**

```sh
docker run -d -p 4010:4010 \
  -v $(pwd)/fixtures:/fixtures \
  ghcr.io/copilotkit/aimock:latest \
  -p 4010 -f /fixtures
```