PydanticAI
Test your PydanticAI agents with deterministic responses. aimock handles structured output validation, tool calls, and streaming — all without API keys.
Quick Start
PydanticAI agents accept a custom base_url through their model configuration.
Point it at aimock and use any string for the API key:
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider

model = OpenAIChatModel(
    "gpt-4o",
    provider=OpenAIProvider(
        base_url="http://localhost:4010/v1",
        api_key="test",
    ),
)

agent = Agent(model)
Start aimock with a fixture file, then run your agent:
# Terminal 1 — start aimock
npx aimock --fixtures ./fixtures
# Terminal 2 — run the agent
python agent.py
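A minimal fixture file gets you started — for example, a ./fixtures/basic.json that answers any message mentioning "weather" (using the same match/response shape as the fixtures later on this page):

```json
{
  "fixtures": [
    {
      "match": { "userMessage": "weather" },
      "response": {
        "content": "72F and sunny in San Francisco"
      }
    }
  ]
}
```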
With aimock-pytest
The aimock-pytest plugin starts and stops the server automatically for each test.
Install it with pip install aimock-pytest. The plugin provides the aimock
fixture out of the box — just request it in your test function.
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider


async def test_agent_responds(aimock):
    # Load fixtures before making LLM calls
    aimock.load_fixtures("./fixtures/pydanticai.json")

    model = OpenAIChatModel(
        "gpt-4o",
        provider=OpenAIProvider(
            base_url=aimock.url + "/v1",
            api_key="test",
        ),
    )
    agent = Agent(model)

    result = await agent.run("What is the weather?")
    assert result.output is not None
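Because the test above is an async function, pytest needs an async plugin to collect and run it. Assuming pytest-asyncio, a minimal pyproject.toml entry enables auto mode so no per-test marker is required:

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
```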
Structured Output
PydanticAI validates LLM responses against Pydantic models. When your agent expects
structured output, PydanticAI extracts the data through tool calls by default,
not through response_format. This means your fixture should
return a tool call whose arguments match your Pydantic schema. Use aimock’s
toolCalls fixture to serve the expected structured response:
{
  "fixtures": [
    {
      "match": { "userMessage": "Weather" },
      "response": {
        "toolCalls": [
          {
            "name": "final_result",
            "arguments": "{\"city\": \"SF\", \"temp\": 72, \"unit\": \"fahrenheit\"}"
          }
        ]
      }
    }
  ]
}
Note: PydanticAI generates a tool named final_result (by
default) whose schema matches your output_type model. The LLM
“calls” this tool with the structured data, and PydanticAI validates the
arguments against your Pydantic model. If you want the model’s native
structured-output mode (response_format) instead, recent PydanticAI versions
let you opt in by wrapping the output type, e.g.
Agent(..., output_type=NativeOutput(Weather)), in which case a
responseFormat fixture would apply:
// Only needed if you disable tool-based structured output
{
  "fixtures": [
    {
      "match": { "responseFormat": "json_object" },
      "response": {
        "content": "{\"city\": \"SF\", \"temp\": 72, \"unit\": \"fahrenheit\"}"
      }
    }
  ]
}
The corresponding PydanticAI agent with a typed output:
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider


class Weather(BaseModel):
    city: str
    temp: int
    unit: str


model = OpenAIChatModel(
    "gpt-4o",
    provider=OpenAIProvider(
        base_url="http://localhost:4010/v1",
        api_key="test",
    ),
)
agent = Agent(model, output_type=Weather)

result = agent.run_sync("Weather in San Francisco")
# result.output is a validated Weather instance
assert result.output.city == "SF"
assert result.output.temp == 72
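Fixture and model must agree: the validation PydanticAI applies to the final_result arguments is plain Pydantic. A quick way to sanity-check a fixture's arguments string before wiring up the agent (a standalone sketch, no server needed):

```python
from pydantic import BaseModel


class Weather(BaseModel):
    city: str
    temp: int
    unit: str


# The exact "arguments" string from the toolCalls fixture above
args = '{"city": "SF", "temp": 72, "unit": "fahrenheit"}'

# The same validation PydanticAI runs on the final_result call
weather = Weather.model_validate_json(args)
print(weather.city, weather.temp)  # SF 72
```

If the fixture drifts out of sync with your model (say, temp becomes a string field), this raises a ValidationError immediately instead of failing deep inside an agent run.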
Tool Calls
PydanticAI tools use typed function arguments. When an agent invokes a tool, the LLM
returns a tool call that PydanticAI validates and dispatches. Use aimock’s
toolCalls fixture to return deterministic tool invocations:
{
  "fixtures": [
    {
      "match": { "userMessage": "weather" },
      "response": {
        "toolCalls": [
          {
            "name": "get_weather",
            "arguments": "{\"city\": \"San Francisco\"}"
          }
        ]
      }
    }
  ]
}
The PydanticAI agent that registers and handles the tool:
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider

model = OpenAIChatModel(
    "gpt-4o",
    provider=OpenAIProvider(
        base_url="http://localhost:4010/v1",
        api_key="test",
    ),
)
agent = Agent(model)


@agent.tool
async def get_weather(ctx: RunContext[None], city: str) -> str:
    return f"72F and sunny in {city}"


result = agent.run_sync("What's the weather?")
# aimock triggers the tool call, PydanticAI dispatches get_weather
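Conceptually, when aimock returns that tool call, the dispatch PydanticAI performs boils down to parsing the JSON arguments and invoking your registered function (a simplified illustration, not PydanticAI internals):

```python
import asyncio
import json


async def get_weather(city: str) -> str:
    return f"72F and sunny in {city}"


# The tool call as served by the aimock fixture above
call = {"name": "get_weather", "arguments": '{"city": "San Francisco"}'}

# Parse the arguments string and dispatch to the matching tool
result = asyncio.run(get_weather(**json.loads(call["arguments"])))
print(result)  # 72F and sunny in San Francisco
```

This is why the arguments string in the fixture must be valid JSON whose keys match your tool's parameter names — a mismatch surfaces as a validation error rather than a silent wrong answer.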
CI with GitHub Action
Use the aimock GitHub Action to run a mock server alongside your Python test suite. No API keys or network access required:
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - uses: CopilotKit/aimock@v1
        with:
          fixtures: ./fixtures
      - run: pip install pydantic-ai pytest
      - run: pytest
        env:
          OPENAI_BASE_URL: http://127.0.0.1:4010/v1
          OPENAI_API_KEY: test
The action starts aimock as a background service on port 4010. Your tests connect via
OPENAI_BASE_URL and never hit a real API. See the
GitHub Action page for all available inputs.
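In the test suite itself, you can resolve the endpoint from those env vars instead of hardcoding it, so the same tests run locally and in CI. A small helper sketch (the fallback values assume a locally started aimock on the default port):

```python
import os


def mock_llm_endpoint() -> tuple[str, str]:
    """Resolve the mock endpoint from the env vars the GitHub Action sets,
    falling back to a locally running aimock server."""
    base_url = os.environ.get("OPENAI_BASE_URL", "http://127.0.0.1:4010/v1")
    api_key = os.environ.get("OPENAI_API_KEY", "test")
    return base_url, api_key


base_url, api_key = mock_llm_endpoint()
print(base_url)
```

Pass the resolved values to OpenAIProvider(base_url=..., api_key=...) as in the examples above, and the suite needs no code changes between local runs and CI.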