Build AI agents
you can actually steer.

Any model. Any cloud. Open source for Python and TypeScript.

pip install strands-agents
npm install @strands-agents/sdk

5,800+ GitHub stars · Built from production systems inside Amazon · Works with any model

agent.py
from strands import Agent, tool

@tool
def weather(city: str) -> dict:
    """Get current weather for a city."""
    return fetch_weather(city)

agent = Agent(tools=[weather])
agent("What's the weather in Seattle?")

Write code, not pipelines

Early agent frameworks wrapped models in orchestration logic because models couldn't reason reliably. Now they can. Strands gives you back control: define your tools as functions, write a system prompt, and the agent loop handles execution. No step definitions, no workflow graphs. Just code.

research_agent.py
from strands import Agent, tool
from strands_tools import http_request
from pathlib import Path

@tool
def save_report(title: str, content: str) -> str:
    """Save a research report to disk."""
    Path(f"reports/{title}.md").write_text(content)
    return f"Saved {title}.md"

# The model decides how to classify, research, and draft
agent = Agent(
    system_prompt="""You are a research assistant. Classify the topic,
    research it using the web, then save a summary report.""",
    tools=[http_request, save_report],
)

agent("Summarize recent AI agent papers")

The model handles orchestration. When it makes a mistake, a plugin handles recovery. Your agent code stays the same.

Your agent ignored your instructions again

You wrote the rules. The model skipped them. Longer prompts make it worse: by line 40, the model is guessing which instructions still matter. Hard-coded workflows are the other extreme: predictable but brittle, and they strip away the reasoning that makes agents useful.

the_old_way.py
agent = Agent(
    system_prompt="""You are a report generator. Always use markdown
    tables for comparisons. Never use bullet lists for data. Format
    currency as $X,XXX.XX. Include a summary section at the top.
    When comparing more than 3 items, split into sub-tables.
    Use ISO 8601 dates. Cite sources with inline links.
    If the user asks about competitors, stay neutral.
    Never share internal pricing..."""
    # The model will follow some of these. Guess which ones.
)

Middleware for the agent loop

Steering hooks intercept the agent loop the same way middleware intercepts HTTP requests. Before a tool call, check the inputs. After a model response, validate the output. Each handler is a Python function you can read, test, and debug.

steering_handler.py
from strands import Agent
from strands.vended_plugins.steering import (
    SteeringHandler, ToolSteeringAction,
)

class NoPricingLeaks(SteeringHandler):
    async def steer_before_tool(self, *, agent, tool_use, **kwargs):
        if tool_use["name"] == "send_email":
            if "internal pricing" in str(tool_use["input"]):
                return ToolSteeringAction.guide(
                    "Contains internal pricing. Redact before sending."
                )
        return ToolSteeringAction.proceed()

agent = Agent(
    tools=[send_email, generate_report],
    plugins=[NoPricingLeaks()],
)
100% task accuracy with steering
Prompt-only agents scored 82.5%. Hard-coded workflows scored 80.8%. Steered agents recovered from every mistake.
Read the blog post →

Everything you need to build agents

Any Model Provider

Bedrock, OpenAI, Anthropic, Ollama, LiteLLM. Swap providers with a single line. Your agent code doesn't change.

# Swap providers in one line
from strands.models.openai import OpenAIModel

agent = Agent(model=OpenAIModel(
    model_id="gpt-4o"
))

Tools from Any Function

Turn any function into an agent tool with @tool. The docstring becomes the LLM's tool description. No schema files, no registration boilerplate.

# Any function becomes a tool
@tool
def search_db(query: str) -> list:
    """Search the product database."""
    return db.search(query)

Native MCP Support

Connect to any MCP server. Use thousands of community tools without writing integration code.

# Connect to any MCP server
from strands.tools.mcp import MCPClient
from mcp import stdio_client, StdioServerParameters

mcp = MCPClient(lambda: stdio_client(
    StdioServerParameters(
        command="uvx",
        args=["my-mcp-server"],
    )
))

Multi-Agent Systems

Compose agents with graphs, swarms, workflows, or simple agent-as-tool patterns. Built-in A2A protocol support for distributed systems.

# Agents as tools for other agents
@tool
def research(query: str) -> str:
    """Research a topic thoroughly."""
    agent = Agent(tools=[search_web])
    return str(agent(query))

writer = Agent(tools=[research])
writer("Write a post about AI agents")

Conversation Memory

Sliding window, summarization, and session persistence out of the box. Manage context across long conversations without manual token counting.

# Manage context automatically
from strands.agent.conversation_manager import (
    SlidingWindowConversationManager,
)

agent = Agent(
    conversation_manager=SlidingWindowConversationManager(
        window_size=5
    ),
)
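
The sliding-window strategy itself is simple to picture. An illustrative plain-Python sketch (not the Strands internals): keep only the most recent N messages so the context sent to the model stays bounded.

```python
# Plain-Python sketch of the sliding-window strategy (illustrative,
# not the Strands internals): once the history exceeds the window,
# drop the oldest messages and keep the most recent ones.

def apply_sliding_window(messages: list[dict], window_size: int) -> list[dict]:
    """Trim the history to at most window_size recent messages."""
    if len(messages) <= window_size:
        return messages
    return messages[-window_size:]

history = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
trimmed = apply_sliding_window(history, window_size=5)
```

The managed versions add details this sketch omits, such as keeping tool-use and tool-result messages paired so the trimmed history stays valid.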

Built-in Observability

OpenTelemetry traces, metrics, and logs with no extra instrumentation. See every tool call, model invocation, and token count.

# Traces with zero config
from strands import Agent

agent = Agent(trace_attributes={
    "service": "my-app",
    "env": "production",
})

At Smartsheet, we chose Strands for our next generation of AI capabilities because it provided the perfect balance of enterprise-ready features and development efficiency. Its robust conversation memory and dynamic tool registration systems were crucial for creating a responsive, context-aware intelligent AI assistant. With Strands, we were able to quickly implement a secure and scalable solution, giving us a production-ready foundation to deliver a secure, high-performance, and enterprise-grade AI experience.

JB Brown VP Engineering, Smartsheet

Strands’ SDK and great integration with AWS native services streamlined Landchecker’s development of agents. With easier integration of AgentCore Runtime, Bedrock Guardrails, and built-in support for OpenTelemetry, we could focus on what we do best – developing property information tools and data integrations.

Rupert de Guzman CTO, Landchecker

At Swisscom, we need an agentic AI backbone that is both enterprise-ready and future-proof. Strands Agents gives us the best of both worlds: a native fit with our cloud environment, yet fully open source and flexible. That combination allowed us to build proof-of-concepts within just a few weeks and now sets us on the path to scale multi-agent systems with confidence, while keeping our focus on delivering real value to customers and the business.

Arun Sittampalam Director of Product Management, Swisscom

We chose Strands because it’s AWS-native, intuitive, and made agent development accessible across our engineering team. Its abstraction layer and built-in multi-agent patterns (like Agent-as-Tool and Swarm) let us focus on remediation logic instead of infrastructure work. We’ve already built multiple agents, and wiring them together has been seamless. On top of that, we layered our Agentic Remediation™ capability to automate vulnerability fixes and configuration validation/fault correction workflows, coordinating cross-agent remediation with precision.

Ben Seri CTO & Co-Founder, Zafran Security

Scaling our global trading platform required reimagining our support capabilities, and Strands Agents was the key to making it happen at enterprise scale. What would traditionally take months of development, Strands allowed us to achieve in just 10 days - delivering a secure, robust, production-ready agentic solution. The results speak for themselves: investigation time dropped on average from 30 minutes to 45 seconds, investigation quality improved by 94%, and we saved $5M in operational costs. Strands didn’t just accelerate our development - it gave us the confidence to explore other agentic AI use cases across our entire business, including launching our Agentic Security Operations Center.

Bryn Newell CIO, Eightcap

We see Strands as a great fit to power TeamForm’s next evolution of Agentic AI. Our customers need enterprise-grade security and scalability, which is exactly what Strands delivers. Its seamless integration with AWS and simplicity enables us to focus on innovating our AI capabilities and delivering value to our customers.

David Marozas Head of Engineering, TeamForm

For Jit’s infrastructure drift detection agent, we leverage Strands Agents, an open-source framework developed by AWS for building production-ready AI agents. Strands Agents provides several advantages including simplified development, native AWS integration, and built-in security.

David Melamed Co-Founder & CTO, Jit

Strands Agents on Bedrock turns autonomous agents into an enterprise product: governed, observable, and safe by design. Together with Claude models, we analyze live webpages and generate code responsibly - helping customers reduce risk while accelerating delivery. Safety is non-negotiable in offensive security. On Amazon Bedrock, Strands Agents plus Claude let us scale autonomous pen-testing with Bedrock Guardrails - increasing coverage without increasing risk.

Gal Malachi Co-founder & CTO, Terra Security

The combination of the Strands Agents SDK and Tavily represents a significant advancement in enterprise-grade research agent development. This integration can help organizations build sophisticated, secure, and scalable AI agents while maintaining the highest standards of security and performance. Learn more in this blog.

Dean Sacoransky Forward Deployed Engineering Leader, Tavily