
Introduction
The OpenAI Agents SDK is a powerful tool for building AI agent-based applications in Python. It includes a small set of key components:
- Agents: LLMs equipped with instructions and tools
- Handoffs: Allow agents to delegate to other agents for specific tasks
- Guardrails: Enable the inputs/outputs to agents to be validated
- Sessions: Maintain conversation history across agent runs automatically
These primitives are powerful enough to let you build real-world applications that manage tasks and conversations. With built-in tracing and evaluation tools, the SDK is production-ready and easy to debug or fine-tune.
You can install the OpenAI Agents SDK using the following command:
pip install openai-agents
Before running or implementing any agent, you need to generate an OpenAI API key — it starts with sk-.
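A common way to supply the key, used again in the Sessions example later in this post, is to keep it in a .env file and load it with python-dotenv so it ends up in the OPENAI_API_KEY environment variable, which the SDK reads by default. A minimal sketch (the .env file name and layout are just a convention, not something the SDK requires):

# .env (keep this file out of version control)
# OPENAI_API_KEY=sk-...

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # loads variables from .env into the process environment
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"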
First Agent Example
As mentioned above, the example below is built from three of these primitives: agents, handoffs, and guardrails. Let's dive deeper into each of them.
from agents import Agent, InputGuardrail, GuardrailFunctionOutput, Runner
from agents.exceptions import InputGuardrailTripwireTriggered
from pydantic import BaseModel
import asyncio

class HomeworkOutput(BaseModel):
    is_homework: bool
    reasoning: str

guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Check if the user is asking about homework.",
    output_type=HomeworkOutput,
)

math_tutor_agent = Agent(
    name="Math Tutor",
    handoff_description="Specialist agent for math questions",
    instructions="You provide help with math problems. Explain your reasoning at each step and include examples",
)

history_tutor_agent = Agent(
    name="History Tutor",
    handoff_description="Specialist agent for historical questions",
    instructions="You provide assistance with historical queries. Explain important events and context clearly.",
)

async def homework_guardrail(ctx, agent, input_data):
    result = await Runner.run(guardrail_agent, input_data, context=ctx.context)
    final_output = result.final_output_as(HomeworkOutput)
    return GuardrailFunctionOutput(
        output_info=final_output,
        tripwire_triggered=not final_output.is_homework,
    )

triage_agent = Agent(
    name="Triage Agent",
    instructions="You determine which agent to use based on the user's homework question",
    handoffs=[history_tutor_agent, math_tutor_agent],
    input_guardrails=[
        InputGuardrail(guardrail_function=homework_guardrail),
    ],
)

async def main():
    # Example 1: History question
    try:
        result = await Runner.run(triage_agent, "who was the first president of the united states?")
        print(result.final_output)
    except InputGuardrailTripwireTriggered as e:
        print("Guardrail blocked this input:", e)

    # Example 2: General/philosophical question
    try:
        result = await Runner.run(triage_agent, "What is the meaning of life?")
        print(result.final_output)
    except InputGuardrailTripwireTriggered as e:
        print("Guardrail blocked this input:", e)

if __name__ == "__main__":
    asyncio.run(main())
AGENT
An agent takes inputs, processes them according to its role, and outputs structured responses. Here are the agents used in this example:
guardrail_agent = Agent(...) # Checks if user input is a homework-related question.
math_tutor_agent = Agent(...) # Solves math homework questions.
history_tutor_agent = Agent(...) # Handles history homework questions.
triage_agent = Agent(...) # Decides which agent should answer the user's input.
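Before wiring agents together, it helps to run one on its own. The sketch below is illustrative (the agent name and prompt are not part of the example above): Runner.run_sync is the blocking counterpart of Runner.run, and because output_type is set, result.final_output comes back as a HomeworkOutput instance rather than plain text.

from agents import Agent, Runner
from pydantic import BaseModel

class HomeworkOutput(BaseModel):
    is_homework: bool
    reasoning: str

checker = Agent(
    name="Homework check",
    instructions="Decide whether the user is asking about homework.",
    output_type=HomeworkOutput,  # the response is parsed and validated into this model
)

result = Runner.run_sync(checker, "Can you help me solve 2x + 3 = 11 for my algebra class?")
print(result.final_output)            # a HomeworkOutput instance
print(result.final_output.reasoning)  # the model's explanation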
HANDOFF
A handoff allows an agent to delegate a task to another agent based on intent or input. The triage_agent uses handoffs to:
- Route math questions to math_tutor_agent
- Route history questions to history_tutor_agent
So, triage_agent is like a smart router or dispatcher.
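Stripped down to just the routing, such a dispatcher can be sketched as follows. This is a minimal, self-contained sketch; the Router agent, its prompt, and the example question are illustrative rather than taken from the article's example.

from agents import Agent, Runner

math_tutor_agent = Agent(
    name="Math Tutor",
    handoff_description="Specialist agent for math questions",
    instructions="You provide help with math problems.",
)
history_tutor_agent = Agent(
    name="History Tutor",
    handoff_description="Specialist agent for historical questions",
    instructions="You provide assistance with historical queries.",
)

# The router is instructed to delegate rather than answer itself.
router = Agent(
    name="Router",
    instructions="Hand math questions to the Math Tutor and history questions to the History Tutor.",
    handoffs=[math_tutor_agent, history_tutor_agent],
)

result = Runner.run_sync(router, "Why did the Roman Empire fall?")
print(result.last_agent.name)  # which specialist actually produced the answer
print(result.final_output)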
GUARDRAIL
Guardrails validate or restrict inputs before letting them reach an agent.
InputGuardrail(guardrail_function=homework_guardrail)
- Runner.run(…) runs the guardrail_agent to check if the input is a homework question.
- If not (tripwire_triggered=True), the input is blocked and not sent to triage_agent.
- The first example's response is:
  “The first President of the United States was George Washington. He served from 1789 to 1797 and was unanimously elected as the nation’s first leader. Washington played a crucial role in the founding of the United States and is often called the ‘Father of His Country.’ Prior to his presidency, he served as the Commander-in-Chief of the Continental Army during the American Revolutionary War. Washington set many precedents for the role of the presidency in the new nation.”
- The second example's response is:
  “Guardrail blocked this input: Guardrail InputGuardrail triggered tripwire”
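Guardrails can also be attached to an agent's output (the component list at the top mentions both inputs and outputs). The sketch below is hedged: it assumes OutputGuardrail, the output_guardrails parameter, and OutputGuardrailTripwireTriggered mirror the input-side classes used above, with the guardrail function receiving the agent's output as its third argument.

from agents import Agent, GuardrailFunctionOutput, OutputGuardrail, Runner
from agents.exceptions import OutputGuardrailTripwireTriggered
import asyncio

async def require_explanation(ctx, agent, output):
    # Trip the wire when the reply is a bare answer with no reasoning (illustrative heuristic).
    too_short = isinstance(output, str) and len(output.split()) < 5
    return GuardrailFunctionOutput(output_info=output, tripwire_triggered=too_short)

tutor = Agent(
    name="Math Tutor",
    instructions="Explain the reasoning behind every answer.",
    output_guardrails=[OutputGuardrail(guardrail_function=require_explanation)],
)

async def main():
    try:
        result = await Runner.run(tutor, "What is 7 * 8?")
        print(result.final_output)
    except OutputGuardrailTripwireTriggered:
        print("Output guardrail blocked this response")

if __name__ == "__main__":
    asyncio.run(main())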
Sessions
A session stores the conversation history for a specific session ID, allowing agents to maintain context without explicit manual memory management. This is particularly useful for building chat applications or multi-turn conversations where you want the agent to remember previous interactions.
How it works
When session memory is enabled:
- Before each run: The runner automatically retrieves the conversation history for the session and prepends it to the input items.
- After each run: All new items generated during the run (user input, assistant responses, tool calls, etc.) are automatically stored in the session.
- Context preservation: Each subsequent run with the same session includes the full conversation history, allowing the agent to maintain context.
from agents import Agent, Runner, SQLiteSession
from openai import OpenAI
import os
from dotenv import load_dotenv
import asyncio

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)

async def main():
    # Create agents
    agent = Agent(
        name="Assistant",
        instructions="Reply very concisely"
    )
    support_agent = Agent(name="Support")
    billing_agent = Agent(name="Billing")

    # Create session instances with a session ID.
    # session_1 and session_2 share the same ID, so they read and write the same history.
    session_1 = SQLiteSession("conversation_123", "conversations.db")
    session_2 = SQLiteSession("conversation_123", "conversations.db")
    session_3 = SQLiteSession("user_123", "conversations.db")

    # First turn
    result = await Runner.run(
        agent,
        input="What city is the Golden Gate Bridge in?",
        session=session_1
    )
    print(result.final_output)

    # Second turn
    result = await Runner.run(
        agent,
        input="What is the capital of France?",
        session=session_2
    )
    print(result.final_output)

    # Third turn: the agent resolves "the population" from the history stored in the same session
    result = await Runner.run(
        agent,
        input="What is the population?",
        session=session_2
    )
    print(result.final_output)

    # A separate session ID keeps an independent history for other agents
    result = await Runner.run(
        support_agent,
        input="help me with my account",
        session=session_3
    )
    print(result.final_output)

    result = await Runner.run(
        billing_agent,
        input="What are my charges?",
        session=session_3
    )
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
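Besides the automatic read-before/write-after behaviour described above, the stored history can be inspected and edited directly. A hedged sketch, assuming SQLiteSession exposes the memory operations get_items, add_items, pop_item, and clear_session described in the SDK's session documentation:

from agents import SQLiteSession
import asyncio

async def manage_history():
    session = SQLiteSession("conversation_123", "conversations.db")

    # Append items to the history by hand (e.g. to seed a conversation)
    await session.add_items([
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help?"},
    ])

    # Read back everything stored under this session ID
    items = await session.get_items()
    print(f"{len(items)} items in history")

    # Remove the most recent item (handy for undoing the last turn)
    await session.pop_item()

    # Delete the whole conversation history for this session ID
    await session.clear_session()

if __name__ == "__main__":
    asyncio.run(manage_history())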
Streaming
Streaming lets you subscribe to updates of the agent run as it proceeds. This can be useful for showing the end-user progress updates and partial responses.
import asyncio
from openai.types.responses import ResponseTextDeltaEvent
from agents import Agent, Runner

async def main():
    agent = Agent(
        name="Joker",
        instructions="You are a helpful assistant.",
    )

    result = Runner.run_streamed(agent, input="Please tell me 5 jokes.")
    async for event in result.stream_events():
        if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
            print(event.data.delta, end="", flush=True)

if __name__ == "__main__":
    asyncio.run(main())
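Raw response events stream token-level deltas, which is what the example above prints. The stream also carries coarser events when whole items finish, such as messages and tool calls. A hedged sketch, assuming the run_item_stream_event type and the ItemHelpers utility from the SDK's streaming examples:

import asyncio
from agents import Agent, ItemHelpers, Runner

async def main():
    agent = Agent(name="Joker", instructions="You are a helpful assistant.")

    result = Runner.run_streamed(agent, input="Please tell me 5 jokes.")
    async for event in result.stream_events():
        if event.type == "run_item_stream_event":
            if event.item.type == "message_output_item":
                # Fires once per completed assistant message, not once per token
                print(ItemHelpers.text_message_output(event.item))

if __name__ == "__main__":
    asyncio.run(main())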