Image Generation

The image generation endpoints support both the OpenAI (POST /v1/images/generations) and Gemini Imagen (POST /v1beta/models/{model}:predict) request formats. Fixtures can return a single image or multiple images, as URLs or as base64-encoded data.

Endpoints

Method Path Format
POST /v1/images/generations JSON (OpenAI)
POST /v1beta/models/{model}:predict JSON (Gemini Imagen)

Unit Test: Single Image URL

Using the programmatic API with vitest, register a fixture and assert on the response.

image-url.test.ts ts
import { LLMock } from "@copilotkit/aimock";
import { describe, it, expect, beforeAll, afterAll } from "vitest";

let mock: LLMock;

beforeAll(async () => {
  mock = new LLMock();
  await mock.start();
});

afterAll(async () => {
  await mock.stop();
});

it("returns a single image URL", async () => {
  mock.onImage("a sunset over mountains", {
    image: { url: "https://example.com/sunset.png" },
  });

  const res = await fetch(`${mock.url}/v1/images/generations`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "dall-e-3",
      prompt: "a sunset over mountains",
      n: 1,
      size: "1024x1024",
    }),
  });

  const body = await res.json();
  expect(body.data[0].url).toBe("https://example.com/sunset.png");
});

Unit Test: Multiple Images

image-multiple.test.ts ts
it("returns multiple images", async () => {
  mock.onImage("cats", {
    images: [
      { url: "https://example.com/cat1.png" },
      { url: "https://example.com/cat2.png" },
    ],
  });

  const res = await fetch(`${mock.url}/v1/images/generations`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "dall-e-3",
      prompt: "cats playing",
      n: 2,
    }),
  });

  const body = await res.json();
  expect(body.data).toHaveLength(2);
  expect(body.data[0].url).toBe("https://example.com/cat1.png");
  expect(body.data[1].url).toBe("https://example.com/cat2.png");
});

Unit Test: Base64 Response

image-base64.test.ts ts
it("returns base64-encoded image", async () => {
  mock.onImage("logo", {
    image: { b64_json: "iVBORw0KGgoAAAANSUhEUg..." },
  });

  const res = await fetch(`${mock.url}/v1/images/generations`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "dall-e-3",
      prompt: "a company logo",
      response_format: "b64_json",
    }),
  });

  const body = await res.json();
  expect(body.data[0].b64_json).toBeDefined();
});
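When response_format is b64_json, the client is responsible for decoding the payload into bytes. In Node this is a one-liner; the payload below is an illustrative stand-in for the value in data[0].b64_json:

```typescript
// Decode a b64_json payload (as returned in data[0].b64_json) into raw bytes.
const b64 = "aGVsbG8="; // illustrative payload; a real one would be PNG data
const bytes = Buffer.from(b64, "base64");
// The bytes can then be written to disk, e.g. fs.writeFileSync("image.png", bytes)
```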

Unit Test: Gemini Imagen Format

image-gemini.test.ts ts
it("handles Gemini Imagen predict endpoint", async () => {
  mock.onImage("landscape", {
    image: { url: "https://example.com/landscape.png" },
  });

  const res = await fetch(
    `${mock.url}/v1beta/models/imagen-3.0-generate-002:predict`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        instances: [{ prompt: "a beautiful landscape" }],
        parameters: { sampleCount: 1 },
      }),
    }
  );

  const body = await res.json();
  expect(body.predictions).toBeDefined();
});
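This section does not spell out which prediction fields the mock emits. The real Imagen predict API returns entries of the shape below, so a similar structure is a reasonable expectation to assert against (field values here are illustrative):

```json
{
  "predictions": [
    {
      "mimeType": "image/png",
      "bytesBase64Encoded": "iVBORw0KGgo..."
    }
  ]
}
```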

JSON Fixture

fixtures/images.json json
{
  "fixtures": [
    {
      "match": { "userMessage": "sunset" },
      "response": {
        "image": { "url": "https://example.com/sunset.png" }
      }
    },
    {
      "match": { "userMessage": "cats" },
      "response": {
        "images": [
          { "url": "https://example.com/cat1.png" },
          { "url": "https://example.com/cat2.png" }
        ]
      }
    }
  ]
}

Response Format

The mock's responses match the OpenAI /v1/images/generations response format.
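For example, a fixture with a single URL yields a body of this shape (the created value, a Unix timestamp, is illustrative):

```json
{
  "created": 1700000000,
  "data": [
    { "url": "https://example.com/sunset.png" }
  ]
}
```

With base64 fixtures, each data entry carries b64_json instead of url.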

Image fixtures use match.userMessage, which maps to the prompt field of the request body. The prompt matcher checks for a substring match, so a fixture registered for "sunset" matches the prompt "a sunset over mountains".
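To make the matching rule concrete, here is a minimal sketch of first-match substring lookup. It is illustrative only, not aimock's actual implementation; Fixture and findFixture are names invented for this example:

```typescript
// Illustrative sketch of substring prompt matching; not aimock's real code.
type Fixture = { match: { userMessage: string }; response: unknown };

function findFixture(fixtures: Fixture[], prompt: string): Fixture | undefined {
  // The first fixture whose userMessage appears anywhere in the prompt wins.
  return fixtures.find((f) => prompt.includes(f.match.userMessage));
}

const fixtures: Fixture[] = [
  {
    match: { userMessage: "sunset" },
    response: { image: { url: "https://example.com/sunset.png" } },
  },
  {
    match: { userMessage: "cats" },
    response: { images: [] },
  },
];

// "a sunset over mountains" contains "sunset", so the first fixture is chosen.
const hit = findFixture(fixtures, "a sunset over mountains");
```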

Record & Replay

When no fixture matches an incoming request, aimock can proxy it to the real API and record the response as a fixture for future replays. Enable recording with the --record flag or via RecordConfig in the programmatic API. Recorded image fixtures capture the url or b64_json from the provider response and save them to disk, so subsequent runs replay instantly without hitting the real API.

CLI sh
npx aimock --record --provider-openai https://api.openai.com