LangChain & LangGraph
Test your LangChain and LangGraph agents without API keys or network calls. Point your LLM at aimock and get deterministic, fixture-driven responses.
Quick Start
LangChain's ChatOpenAI accepts a base_url parameter. Point it at
aimock and every call goes through your fixtures instead of the real API.
from langchain_openai import ChatOpenAI

# Start aimock first: npx aimock --fixtures ./fixtures
llm = ChatOpenAI(
    base_url="http://localhost:4010/v1",
    api_key="test",
)

result = llm.invoke("hello")
print(result.content)  # deterministic fixture response
This works with any LangChain component that wraps an OpenAI-compatible API, i.e. any integration that accepts a base_url parameter. For Azure OpenAI, see the Azure OpenAI guide.
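If you'd rather not hard-code the URL, the OpenAI SDK that langchain-openai wraps falls back to standard environment variables when base_url and api_key aren't passed explicitly (the same mechanism the CI example later in this guide relies on):

```shell
# Point every OpenAI-compatible client at aimock via the environment.
# ChatOpenAI() picks these up when base_url / api_key are not set in code.
export OPENAI_BASE_URL="http://localhost:4010/v1"
export OPENAI_API_KEY="test"  # any placeholder; aimock doesn't validate keys
```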
With aimock-pytest
The aimock-pytest plugin starts and stops the server automatically per test.
The aimock fixture exposes a .url property you pass as the base
URL.
from langchain_openai import ChatOpenAI

def test_my_chain(aimock):
    # Load fixtures before making LLM calls
    aimock.load_fixtures("./fixtures/langchain.json")
    llm = ChatOpenAI(
        base_url=f"{aimock.url}/v1",
        api_key="test",
    )
    result = llm.invoke("hello")
    assert "Hi" in result.content
Install with pip install aimock-pytest. The fixture handles server lifecycle
so your tests stay fast and isolated.
Multi-Turn Agent Loops (LangGraph)
LangGraph agents make multiple sequential LLM calls as they reason, call tools, and synthesize results. A single user request like "plan a trip" might trigger three or more completions in a loop. Later calls add tool results to the conversation, but the original user message persists across the loop, so aimock's sequenceIndex matcher tracks how many times a given message pattern has matched and serves a different fixture on each pass.
{
  "fixtures": [
    {
      "match": { "userMessage": "plan a trip", "sequenceIndex": 0 },
      "response": { "content": "I'll help plan your trip. Let me look up some options." }
    },
    {
      "match": { "userMessage": "plan a trip", "sequenceIndex": 1 },
      "response": {
        "toolCalls": [
          { "name": "search_flights", "arguments": "{\"origin\":\"SFO\",\"dest\":\"NRT\"}" }
        ]
      }
    },
    {
      "match": { "userMessage": "plan a trip", "sequenceIndex": 2 },
      "response": { "content": "I found 3 flights from SFO to Tokyo Narita. The best option is..." }
    }
  ]
}
Each fixture fires once in order. The first LLM call matches index 0, the second matches index 1, and so on. This lets you script the exact behavior of a multi-step agent without any real API calls.
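To make the ordering concrete, here is a minimal sketch of how sequence-indexed matching can work: keep a counter per message pattern and pick the fixture whose sequenceIndex equals the current count. This is an illustration of the logic, not aimock's actual implementation.

```python
from collections import defaultdict

# The three trip-planning fixtures from the example above, abbreviated.
fixtures = [
    {"match": {"userMessage": "plan a trip", "sequenceIndex": 0},
     "response": {"content": "I'll help plan your trip."}},
    {"match": {"userMessage": "plan a trip", "sequenceIndex": 1},
     "response": {"toolCalls": [{"name": "search_flights"}]}},
    {"match": {"userMessage": "plan a trip", "sequenceIndex": 2},
     "response": {"content": "I found 3 flights."}},
]

seen = defaultdict(int)  # how many times each message pattern has matched so far

def respond(user_message):
    index = seen[user_message]
    for f in fixtures:
        m = f["match"]
        # A fixture matches when the message agrees and its sequenceIndex
        # (if present) equals the number of previous matches.
        if m["userMessage"] == user_message and m.get("sequenceIndex", index) == index:
            seen[user_message] += 1
            return f["response"]
    raise LookupError(f"no fixture for {user_message!r} at index {index}")

# Three sequential calls with the same user message hit fixtures 0, 1, 2 in order.
first = respond("plan a trip")
second = respond("plan a trip")
third = respond("plan a trip")
```

The counter advances only on a match, which is why each fixture fires exactly once and in order.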
See Fixtures and Sequential Responses for the full matching reference.
Tool Call Fixtures
LangChain's tool-calling agents expect the LLM to return structured
tool_calls in the response. Use the toolCalls response field to
return them from aimock.
{
  "fixtures": [
    {
      "match": { "userMessage": "what's the weather in SF" },
      "response": {
        "toolCalls": [
          {
            "name": "get_weather",
            "arguments": "{\"location\":\"San Francisco\",\"unit\":\"fahrenheit\"}"
          }
        ]
      }
    }
  ]
}
When LangChain receives this response, it will invoke the get_weather tool
with the given arguments, just as it would with a real OpenAI response. Combine tool call
fixtures with sequenceIndex to script the full loop: tool call, tool result
injection, then final answer.
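For intuition, here is a sketch of how a fixture's toolCalls field maps onto the OpenAI chat-completion wire format that LangChain parses. The fixture-side field names follow the example above; the wire-format shape is the standard OpenAI one, and the helper is purely illustrative.

```python
import json
import uuid

# The fixture response from the weather example above.
fixture_response = {
    "toolCalls": [
        {"name": "get_weather",
         "arguments": "{\"location\":\"San Francisco\",\"unit\":\"fahrenheit\"}"}
    ]
}

def to_openai_message(resp):
    """Build the assistant message a mock server would return for this fixture."""
    return {
        "role": "assistant",
        "content": resp.get("content"),  # None when only tool calls are returned
        "tool_calls": [
            {
                "id": f"call_{uuid.uuid4().hex[:8]}",
                "type": "function",
                "function": {"name": tc["name"], "arguments": tc["arguments"]},
            }
            for tc in resp.get("toolCalls", [])
        ],
    }

message = to_openai_message(fixture_response)
# LangChain JSON-decodes the arguments string before invoking the tool.
args = json.loads(message["tool_calls"][0]["function"]["arguments"])
```

Note that arguments stays a JSON *string* on the wire, exactly as in the fixture; the client decodes it before calling the tool.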
Record & Replay
Don't want to write fixtures by hand? Record a real LangGraph session and replay it in tests. Start aimock in record mode, run your agent against a live provider, and aimock saves every request/response pair as a fixture file.
npx aimock --record --provider-openai https://api.openai.com -f ./fixtures
Then point your LangChain code at http://localhost:4010/v1 and run your agent
normally. Every LLM call is captured. On subsequent runs, aimock replays the recorded
responses without network calls.
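The exact layout of a recorded fixture file is defined by aimock itself; assuming it follows the same shape as the hand-written examples above, a captured request/response pair would look roughly like:

```json
{
  "fixtures": [
    {
      "match": { "userMessage": "plan a trip" },
      "response": { "content": "I'll help plan your trip. Let me look up some options." }
    }
  ]
}
```

Recorded fixtures can be edited by hand afterward, for example to add sequenceIndex constraints.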
# Replay mode (default when fixtures exist)
npx aimock -f ./fixtures
# Run your tests against the recorded fixtures
pytest tests/
See Record & Replay for the full reference.
CI with GitHub Action
The CopilotKit/aimock GitHub Action starts aimock as a background service in
CI. Your tests run against fixtures with zero external dependencies.
- uses: CopilotKit/aimock@v1
  with:
    fixtures: ./test/fixtures
- run: pytest
  env:
    OPENAI_BASE_URL: http://127.0.0.1:4010/v1
No API keys needed in CI. No flaky tests from rate limits or network timeouts. See GitHub Action for all available options.
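In a complete workflow the action step runs before the test step. A minimal job might look like this; the checkout, Python setup, and dependency-install steps are standard assumptions, and only the aimock inputs come from this guide:

```yaml
name: tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      # Starts aimock as a background service serving the fixtures directory.
      - uses: CopilotKit/aimock@v1
        with:
          fixtures: ./test/fixtures
      - run: pytest
        env:
          OPENAI_BASE_URL: http://127.0.0.1:4010/v1
```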