
BedrockModel

Defined in: src/models/bedrock.ts:324

AWS Bedrock model provider implementation.

Implements the Model interface for AWS Bedrock using the Converse Stream API. Supports streaming responses, tool use, prompt caching, and comprehensive error handling.

```typescript
const provider = new BedrockModel({
  modelConfig: {
    modelId: 'global.anthropic.claude-sonnet-4-6',
    maxTokens: 1024,
    temperature: 0.7
  },
  clientConfig: {
    region: 'us-west-2'
  }
})

const messages: Message[] = [
  { type: 'message', role: 'user', content: [{ type: 'textBlock', text: 'Hello!' }] }
]

for await (const event of provider.stream(messages)) {
  if (event.type === 'modelContentBlockDeltaEvent' && event.delta.type === 'textDelta') {
    process.stdout.write(event.delta.text)
  }
}
```
```typescript
new BedrockModel(options?): BedrockModel;
```

Defined in: src/models/bedrock.ts:358

Creates a new BedrockModel instance.

| Parameter | Type | Description |
| --- | --- | --- |
| `options?` | `BedrockModelOptions` | Optional configuration for model and client |

BedrockModel

```typescript
// Minimal configuration with defaults
const provider = new BedrockModel({
  region: 'us-west-2'
})

// With model configuration
const provider = new BedrockModel({
  region: 'us-west-2',
  modelId: 'global.anthropic.claude-sonnet-4-6',
  maxTokens: 2048,
  temperature: 0.8,
  cacheConfig: { strategy: 'auto' }
})

// With client configuration
const provider = new BedrockModel({
  region: 'us-east-1',
  clientConfig: {
    credentials: myCredentials
  }
})
```

Model.constructor

```typescript
get modelId(): string;
```

Defined in: src/models/model.ts:205

The model ID from the current configuration, if configured.

string

Model.modelId


```typescript
get stateful(): boolean;
```

Defined in: src/models/model.ts:221

Whether this model manages conversation state server-side.

When true, the server tracks conversation context across turns, so the SDK sends only the latest message instead of the full history. After each invocation, the agent’s local message history is cleared automatically.

Model providers that support server-side state management should override this to return true.

boolean

false by default
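A provider that backs onto a stateful service would override this getter to return `true`. The class below is a minimal, hypothetical sketch of that pattern; it stands in for a real subclass of the SDK's `Model` base class, which is omitted here.

```typescript
// Hypothetical provider whose backend stores conversation state server-side.
// Sketch only: a real implementation would extend the Model base class.
class StatefulProviderSketch {
  // Override the default (false): the SDK then sends only the latest message
  // and clears the agent's local history after each invocation.
  get stateful(): boolean {
    return true
  }
}

const provider = new StatefulProviderSketch()
console.log(provider.stateful) // true
```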

Model.stateful

```typescript
updateConfig(modelConfig): void;
```

Defined in: src/models/bedrock.ts:450

Updates the model configuration. Merges the provided configuration with existing settings.

| Parameter | Type | Description |
| --- | --- | --- |
| `modelConfig` | `BedrockModelConfig` | Configuration object with model-specific settings to update |

void

```typescript
// Update temperature and maxTokens
provider.updateConfig({
  temperature: 0.9,
  maxTokens: 2048
})
```
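The merge behavior can be illustrated with a plain shallow merge. This is an assumption about the semantics, not the SDK's actual implementation; consult the source for the authoritative behavior.

```typescript
// Illustrative shallow merge: fields in the update override existing fields,
// while untouched fields are preserved.
const current = { modelId: 'global.anthropic.claude-sonnet-4-6', maxTokens: 1024, temperature: 0.7 }
const update = { temperature: 0.9, maxTokens: 2048 }
const merged = { ...current, ...update }
// merged.modelId is unchanged; temperature and maxTokens take the new values
```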

Model.updateConfig


```typescript
getConfig(): BedrockModelConfig;
```

Defined in: src/models/bedrock.ts:465

Retrieves the current model configuration.

BedrockModelConfig

The current configuration object

```typescript
const config = provider.getConfig()
console.log(config.modelId)
```

Model.getConfig


```typescript
countTokens(messages, options?): Promise<number>;
```

Defined in: src/models/bedrock.ts:479

Count tokens using Bedrock’s native CountTokens API.

Uses the same message format as the Converse API to get accurate token counts directly from the Bedrock service. Falls back to the base class heuristic on failure.

| Parameter | Type | Description |
| --- | --- | --- |
| `messages` | `Message[]` | Array of conversation messages to count tokens for |
| `options?` | `CountTokensOptions` | Optional options containing system prompt and tool specs |

Promise<number>

Total input token count
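The try-native-then-heuristic fallback described above can be sketched generically. The function and parameter names below are illustrative, not the SDK's internals.

```typescript
// Sketch of the fallback pattern: try the native CountTokens call first,
// and fall back to a local heuristic if the service call fails.
async function countTokensWithFallback(
  nativeCount: () => Promise<number>,
  heuristicCount: () => number
): Promise<number> {
  try {
    return await nativeCount()
  } catch {
    // e.g. the CountTokens API is unavailable for this model or region
    return heuristicCount()
  }
}
```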

Model.countTokens


```typescript
stream(messages, options?): AsyncIterable<ModelStreamEvent>;
```

Defined in: src/models/bedrock.ts:535

Streams a conversation with the Bedrock model. Returns an async iterable that yields streaming events as they occur.

| Parameter | Type | Description |
| --- | --- | --- |
| `messages` | `Message[]` | Array of conversation messages |
| `options?` | `StreamOptions` | Optional streaming configuration |

AsyncIterable<ModelStreamEvent>

Async iterable of streaming events

ContextWindowOverflowError - When input exceeds the model's context window

ModelThrottledError - When Bedrock service throttles requests

```typescript
const messages: Message[] = [
  { type: 'message', role: 'user', content: [{ type: 'textBlock', text: 'What is 2+2?' }] }
]

const options: StreamOptions = {
  systemPrompt: 'You are a helpful math assistant.',
  toolSpecs: [calculatorTool]
}

for await (const event of provider.stream(messages, options)) {
  if (event.type === 'modelContentBlockDeltaEvent') {
    console.log(event.delta)
  }
}
```

Model.stream


```typescript
streamAggregated(messages, options?): AsyncGenerator<
  | ContentBlock
  | ModelStreamEvent, StreamAggregatedResult, undefined>;
```

Defined in: src/models/model.ts:307

Streams a conversation with aggregated content blocks and messages. Returns an async generator that yields streaming events and content blocks, and returns the final message with stop reason and optional metadata.

This method enhances the basic stream() by collecting streaming events into complete ContentBlock and Message objects, which are needed by the agentic loop for tool execution and conversation management.

The method yields:

  • ModelStreamEvent - Original streaming events (passed through)
  • ContentBlock - Complete content block (emitted when block completes)

The method returns:

  • StreamAggregatedResult containing the complete message, stop reason, and optional metadata

All exceptions thrown from this method are wrapped in ModelError to provide a consistent error type for model-related errors. Specific error subtypes like ContextWindowOverflowError, ModelThrottledError, and MaxTokensError are preserved.

| Parameter | Type | Description |
| --- | --- | --- |
| `messages` | `Message[]` | Array of conversation messages |
| `options?` | `StreamOptions` | Optional streaming configuration |

`AsyncGenerator<ContentBlock | ModelStreamEvent, StreamAggregatedResult, undefined>`

Async generator yielding ModelStreamEvent | ContentBlock and returning a StreamAggregatedResult

ModelError - Base class for all model-related errors

ContextWindowOverflowError - When input exceeds the model’s context window

ModelThrottledError - When the model provider throttles requests

MaxTokensError - When the model reaches its maximum token limit
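Because the `StreamAggregatedResult` is the generator's *return* value rather than a yielded value, a plain `for await` loop discards it; the iterator protocol must be driven manually to capture it. The sketch below demonstrates this with a simplified stand-in generator, since the real one requires a live model call.

```typescript
// Simplified stand-in for streamAggregated: yields events, returns a result.
async function* fakeStreamAggregated() {
  yield { type: 'textDelta', text: 'Hel' }
  yield { type: 'textDelta', text: 'lo' }
  return { stopReason: 'endTurn' }
}

async function consume() {
  const gen = fakeStreamAggregated()
  let step = await gen.next()
  while (!step.done) {
    // Streaming events and completed content blocks arrive here
    step = await gen.next()
  }
  // Once done, step.value holds the final result (the StreamAggregatedResult
  // analogue), which a for-await loop would silently drop.
  return step.value
}
```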

Model.streamAggregated