Prompts

In the Strands Agents SDK, prompts are the primary way to communicate with AI models. The SDK provides a flexible system for managing them, covering both system prompts and user messages.

System prompts provide high-level instructions to the model about its role, capabilities, and constraints. They set the foundation for how the model should behave throughout the conversation. You can specify the system prompt when initializing an agent:

from strands import Agent

agent = Agent(
    system_prompt=(
        "You are a financial advisor specialized in retirement planning. "
        "Use tools to gather information and provide personalized advice. "
        "Always explain your reasoning and cite sources when possible."
    )
)

If you do not specify a system prompt, the model will behave according to its default settings.

User messages are your queries or requests to the agent. The SDK supports multiple techniques for prompting.

The simplest way to interact with an agent is through a text prompt:

response = agent("What is the time in Seattle?")

The SDK supports multi-modal prompts, allowing you to include images, documents, and other content types in your messages:

with open("path/to/image.png", "rb") as fp:
    image_bytes = fp.read()

response = agent([
    {"text": "What can you see in this image?"},
    {
        "image": {
            "format": "png",
            "source": {
                "bytes": image_bytes,
            },
        },
    },
])
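Building these content blocks by hand gets repetitive, so you may want a small helper. The function below is a hypothetical convenience (not part of the SDK) that constructs an image block like the one above from a file path, inferring the format from the file extension:

```python
from pathlib import Path

def image_block(path: str) -> dict:
    """Build an image content block from a file path (hypothetical helper)."""
    fmt = Path(path).suffix.lstrip(".").lower()
    if fmt == "jpg":
        fmt = "jpeg"  # normalize the common extension to the format name
    data = Path(path).read_bytes()
    return {"image": {"format": fmt, "source": {"bytes": data}}}
```

With it, the multi-modal prompt above becomes `agent([{"text": "What can you see in this image?"}, image_block("path/to/image.png")])`.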

For a complete list of supported content types, please refer to the API Reference.

Prompting is the primary way to use Strands: natural language requests that the agent can fulfill by invoking tools. If you need more programmatic control, however, Strands also allows you to invoke tools directly:

result = agent.tool.current_time(timezone="US/Pacific")

Direct tool calls bypass the natural language interface and execute the tool using specified parameters. These calls are added to the conversation history by default. However, you can opt out of this behavior by setting record_direct_tool_call=False in Python.
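The effect of `record_direct_tool_call` can be sketched with a toy model. This is an illustration only, not the SDK's actual implementation: when recording is on, the call and its result are appended to the conversation history; when off, only the result is returned.

```python
class ToyAgent:
    """Toy illustration of conversation recording (not SDK code)."""

    def __init__(self, record_direct_tool_call: bool = True):
        self.record_direct_tool_call = record_direct_tool_call
        self.messages = []  # conversation history

    def direct_tool_call(self, tool_name: str, **kwargs):
        result = f"{tool_name} executed with {kwargs}"
        if self.record_direct_tool_call:
            # Recorded calls land in history, so the model sees them later.
            self.messages.append({"role": "user", "content": f"Tool call: {tool_name}"})
            self.messages.append({"role": "assistant", "content": result})
        return result

recording = ToyAgent()
recording.direct_tool_call("current_time", timezone="US/Pacific")  # history grows

silent = ToyAgent(record_direct_tool_call=False)
silent.direct_tool_call("current_time", timezone="US/Pacific")  # history stays empty
```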

Crafting effective prompts is essential for building useful agents. While simple text instructions work for basic tasks, getting complex behavior out of agents benefits from more structured approaches.

Agent SOPs (Standard Operating Procedures) are a standardized markdown format for defining agent workflows in natural language. They hit a “determin-ish-tic” sweet spot between fully code-defined workflows and open-ended model-driven agents, providing structure for consistency while preserving the agent’s reasoning ability.

Here is a minimal example of an Agent SOP:

# Code Review SOP
## Parameters
- repo_path (REQUIRED): Path to the repository to review
## Steps
### Step 1: Understand the Changes
- MUST read the diff of all changed files
- SHOULD summarize what the changes are doing at a high level
### Step 2: Review for Issues
- MUST check for bugs, security vulnerabilities, and logic errors
- SHOULD flag any style or readability concerns
- MAY suggest alternative approaches where appropriate
### Step 3: Provide Feedback
- MUST output a structured review with file-level comments
- SHOULD categorize findings by severity (critical, warning, suggestion)
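In practice, an SOP like the one above can be loaded from a markdown file and supplied as the agent's system prompt. The sketch below uses a hypothetical helper, `build_sop_prompt`, and an invented "Provided Parameters" convention for passing in the declared parameter values; neither is part of the SDK:

```python
from pathlib import Path

def build_sop_prompt(sop_markdown: str, **params: str) -> str:
    """Append parameter values so the agent sees the SOP and its inputs together."""
    param_lines = "\n".join(f"- {name}: {value}" for name, value in params.items())
    return f"{sop_markdown}\n\n## Provided Parameters\n{param_lines}"

# e.g. sop_text = Path("code_review_sop.md").read_text()
sop_text = (
    "# Code Review SOP\n"
    "## Parameters\n"
    "- repo_path (REQUIRED): Path to the repository to review"
)
system_prompt = build_sop_prompt(sop_text, repo_path="/workspace/my-repo")
```

The resulting string can then be passed as `Agent(system_prompt=system_prompt)`.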

Following the Agent SOP format makes it easier to understand an agent's behavior, debug it when it deviates from instructions, and steer agents consistently regardless of the underlying model.

Debugging and fixing system prompts is difficult and expensive, often requiring costly evaluations to validate that your agent behaves as expected. Converting system prompts into SOPs makes the editing process straightforward.

For more on authoring and using Agent SOPs, including SOP chaining for multi-phase workflows, see the Agent SOPs GitHub repository.

For guidance on writing safe and responsible prompts, including defending against prompt injection and adversarial attacks, refer to our Safety & Security - Prompt Engineering documentation.