API Reference
openai_agents_testkit.models.FakeModel
Bases: Model
Fake model that returns predefined responses without calling any API.
This model simulates API latency and returns configurable responses, making it ideal for testing agent behavior without actual API calls.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| delay | float | Simulated response delay in seconds. Defaults to 0.1. | 0.1 |
| response_factory | ResponseFactory \| None | Optional callable that generates response text. Receives (call_id, input) and returns the response string. | None |
Example

```python
from openai_agents_testkit import FakeModel, FakeModelProvider
from agents import Agent, Runner, RunConfig

provider = FakeModelProvider(delay=0.1)
agent = Agent(name="Test", model="fake-model", instructions="Test")
result = Runner.run_sync(
    agent,
    "Hello",
    run_config=RunConfig(model_provider=provider),
)
```
Source code in src/openai_agents_testkit/models.py
async get_response(system_instructions, input, model_settings, tools, output_schema, handoffs, tracing, *, previous_response_id=None, conversation_id=None, prompt=None)
Return a fake response after simulated delay.
Records call details in call_history for test assertions.
Source code in src/openai_agents_testkit/models.py
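The recorded call details are what make assertions possible in tests. A minimal sketch of the recording pattern, using a hypothetical stand-in class (the real FakeModel lives in openai_agents_testkit.models; the record shape and names below are illustrative assumptions, not the library's API):

```python
# Hypothetical stand-in mimicking FakeModel's call recording; the record
# shape here is an illustrative assumption, not the library's API.
class RecordingModel:
    def __init__(self):
        self.call_history = []

    def get_response(self, input):
        # Record the call for later test assertions, then return a canned reply.
        self.call_history.append({"input": input})
        return "Fake response"

model = RecordingModel()
model.get_response("Hello")

# Tests can then assert on what the agent actually sent to the model.
assert len(model.call_history) == 1
assert model.call_history[0]["input"] == "Hello"
```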
reset()
stream_response(system_instructions, input, model_settings, tools, output_schema, handoffs, tracing, *, previous_response_id=None, conversation_id=None, prompt=None)
Streaming is not implemented for FakeModel.
Raises:

| Type | Description |
|---|---|
| NotImplementedError | Always raised, as streaming is not supported. |
Source code in src/openai_agents_testkit/models.py
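Tests that accidentally exercise the streaming path should fail loudly. A sketch of asserting this failure mode with a plain try/except (in a pytest suite you would typically reach for pytest.raises(NotImplementedError) instead); the stub class is illustrative, not the library's implementation:

```python
# Illustrative stub reproducing the documented behavior: streaming is
# intentionally unsupported and always raises.
class NoStreamModel:
    def stream_response(self, *args, **kwargs):
        raise NotImplementedError("Streaming is not supported by FakeModel")

raised = False
try:
    NoStreamModel().stream_response()
except NotImplementedError:
    raised = True

assert raised  # the streaming path must raise, never silently succeed
```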
openai_agents_testkit.models.FakeModelProvider
Bases: ModelProvider
Fake model provider that returns FakeModel instances.
Manages a pool of FakeModel instances, one per model name, allowing consistent model access across multiple agent runs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| delay | float | Response delay for all models. Defaults to 0.1. | 0.1 |
| response_factory | ResponseFactory \| None | Optional response factory applied to all models. | None |
Example

```python
provider = FakeModelProvider(delay=0.5)
model = provider.get_model("gpt-4")

# Same instance returned for same model name
assert provider.get_model("gpt-4") is model
```
Source code in src/openai_agents_testkit/models.py
clear()
get_all_models()
Get all created model instances.
Useful for inspecting call counts across all models in tests.
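The per-name pool described above can be sketched as follows. This is a hedged stand-in based on the documented behavior (one lazily created instance per model name, retrievable all at once); the class and attribute names are illustrative assumptions, not the real implementation:

```python
# Hedged sketch of the provider's per-name caching; names are illustrative.
class StubModel:
    def __init__(self, name):
        self.name = name
        self.call_count = 0

class StubProvider:
    def __init__(self):
        self._models = {}

    def get_model(self, name):
        # One instance per model name, created lazily and then reused.
        if name not in self._models:
            self._models[name] = StubModel(name)
        return self._models[name]

    def get_all_models(self):
        return list(self._models.values())

provider = StubProvider()
assert provider.get_model("gpt-4") is provider.get_model("gpt-4")
provider.get_model("gpt-4o")

# Inspect call counts across every model the tests touched.
assert len(provider.get_all_models()) == 2
```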
get_model(model_name)
Get or create a FakeModel for the given model name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_name | str \| None | The model identifier. Any name is accepted; a FakeModel is always returned, cached per name. | *required* |

Returns:

| Type | Description |
|---|---|
| Model | A FakeModel instance, cached per model_name. |
Source code in src/openai_agents_testkit/models.py
openai_agents_testkit.fixtures
Pytest fixtures for testing openai-agents-python applications.
This module provides pytest fixtures that are automatically available when openai-agents-testkit is installed, thanks to the pytest11 entry point.
Usage

```python
# In your test file, fixtures are auto-discovered
def test_my_agent(fake_model_provider):
    agent = Agent(name="Test", model="gpt-4", instructions="Test")
    result = Runner.run_sync(
        agent,
        "Hello",
        run_config=RunConfig(model_provider=fake_model_provider),
    )
    assert "Fake response" in result.final_output
```
fake_model()
Provide a FakeModel instance for testing.
The model is reset after each test to ensure clean state.
Yields:

| Type | Description |
|---|---|
| FakeModel | A FakeModel instance with default settings (0.1 s delay). |
Example

```python
def test_model_calls(fake_model):
    # Use fake_model directly or via a provider
    assert fake_model.call_count == 0
```
Source code in src/openai_agents_testkit/fixtures.py
fake_model_provider()
Provide a FakeModelProvider instance for testing.
All models are cleared after each test to ensure clean state.
Yields:

| Type | Description |
|---|---|
| FakeModelProvider | A FakeModelProvider instance with default settings. |
Example

```python
def test_agent(fake_model_provider):
    agent = Agent(name="Test", model="gpt-4", instructions="Test")
    result = Runner.run_sync(
        agent,
        "Hello",
        run_config=RunConfig(model_provider=fake_model_provider),
    )
```
Source code in src/openai_agents_testkit/fixtures.py
fake_model_provider_factory()
Factory fixture for creating customized FakeModelProvider instances.
Use this when you need to customize delay or response_factory.
Yields:

| Type | Description |
|---|---|
| Callable[..., FakeModelProvider] | A factory function that creates FakeModelProvider instances. |
Example

```python
def test_slow_responses(fake_model_provider_factory):
    provider = fake_model_provider_factory(delay=2.0)
    # Test timeout handling...

def test_custom_responses(fake_model_provider_factory):
    def custom_response(call_id, input):
        return f"Custom: {input}"

    provider = fake_model_provider_factory(response_factory=custom_response)
```
Source code in src/openai_agents_testkit/fixtures.py
no_delay_provider()
Provide a FakeModelProvider with zero delay for fast tests.
Useful when you don't need to test timing behavior and want faster test execution.
Yields:

| Type | Description |
|---|---|
| FakeModelProvider | A FakeModelProvider with delay=0. |
Source code in src/openai_agents_testkit/fixtures.py
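To see why a zero-delay provider speeds up suites: assuming the simulated latency amounts to an async sleep of `delay` seconds per call (which the delay parameter above suggests, though the real implementation may differ), each model call costs roughly the configured delay:

```python
import asyncio
import time

# Assumption: the simulated latency is an async sleep of `delay` seconds
# per call; this sketch measures that per-call cost directly.
async def fake_call(delay: float) -> str:
    await asyncio.sleep(delay)
    return "Fake response"

start = time.monotonic()
asyncio.run(fake_call(0.0))
no_delay = time.monotonic() - start

start = time.monotonic()
asyncio.run(fake_call(0.1))
with_delay = time.monotonic() - start

assert no_delay < with_delay  # zero-delay calls return almost immediately
```

A suite with hundreds of fake model calls saves the accumulated delays outright, which is why timing-insensitive tests should prefer this fixture.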
pytest_configure(config)
Disable OpenAI agents tracing when testkit is loaded.
Tracing is a separate telemetry subsystem that sends data to OpenAI's API. FakeModelProvider only mocks LLM API calls, not tracing calls. This hook ensures tracing is disabled before any tests run, preventing 401 errors from invalid API keys in test environments.