BedrockModelConfig

Defined in: src/models/bedrock.ts:209

Configuration interface for AWS Bedrock model provider.

Extends BaseModelConfig with Bedrock-specific configuration options for model parameters, caching, and additional request/response fields.

Example:

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  maxTokens: 1024,
  temperature: 0.7,
  cacheConfig: { strategy: 'auto' }
}
```
optional maxTokens?: number;

Defined in: src/models/bedrock.ts:215

Maximum number of tokens to generate in the response.

See: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html

Inherited from: BaseModelConfig.maxTokens


optional temperature?: number;

Defined in: src/models/bedrock.ts:222

Controls randomness in generation. Higher values produce more varied output; lower values make output more deterministic.

See: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html

Inherited from: BaseModelConfig.temperature


optional topP?: number;

Defined in: src/models/bedrock.ts:229

Controls diversity via nucleus sampling.

See: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html

Inherited from: BaseModelConfig.topP


optional stopSequences?: string[];

Defined in: src/models/bedrock.ts:234

Array of sequences that will stop generation when encountered.
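A minimal sketch of how stopSequences might be set; the documented fields are mirrored in a small local type here purely for illustration, and the sequence values are hypothetical:

```typescript
// Local sketch of the documented fields; the real BedrockModelConfig
// interface lives in src/models/bedrock.ts.
interface StopSketch {
  modelId?: string;
  stopSequences?: string[];
}

// Generation halts as soon as the model emits any of these sequences.
const stopConfig: StopSketch = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  stopSequences: ['\n\nHuman:', '</answer>'],
};
```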


optional cacheConfig?: CacheConfig;

Defined in: src/models/bedrock.ts:241

Configuration for prompt caching.

See: https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html
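A hedged sketch of a cacheConfig value, reusing the 'auto' strategy shown in the overview example at the top of this page; CacheConfig's full shape is defined by the library:

```typescript
// Sketch only: CacheConfig's exact fields are defined by the library;
// the overview example for this interface uses a 'strategy' field.
const cachedConfig: { cacheConfig?: { strategy?: string } } = {
  // 'auto' mirrors the overview example; it presumably lets the
  // provider decide where cache checkpoints are placed.
  cacheConfig: { strategy: 'auto' },
};
```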


optional additionalRequestFields?: JSONValue;

Defined in: src/models/bedrock.ts:246

Additional fields to include in the Bedrock request.


optional additionalResponseFieldPaths?: string[];

Defined in: src/models/bedrock.ts:251

Additional response field paths to extract from the Bedrock response.


optional additionalArgs?: JSONValue;

Defined in: src/models/bedrock.ts:257

Additional arguments to pass through to the Bedrock Converse API.

See: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/bedrock-runtime/command/ConverseStreamCommand/
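The three pass-through fields can be sketched together. The nested keys shown below are illustrative assumptions, not part of the documented interface; consult the target model's Converse documentation for the fields it actually accepts:

```typescript
// Illustrative only: the nested keys here are assumptions chosen to
// show the shape of each pass-through field.
const passThrough = {
  // Merged into the Converse request body (model-specific fields).
  additionalRequestFields: { top_k: 200 },
  // Paths to extract from the raw Converse response.
  additionalResponseFieldPaths: ['/stopReason'],
  // Extra arguments forwarded to the Converse/ConverseStream command input.
  additionalArgs: { performanceConfig: { latency: 'standard' } },
};
```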


optional stream?: boolean;

Defined in: src/models/bedrock.ts:267

Whether to stream responses from the model.

This will use the ConverseStream API instead of the Converse API.
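For example, a streaming configuration might look like the following sketch (only stream and modelId are shown):

```typescript
// With stream set to true, the provider calls the ConverseStream API
// instead of the one-shot Converse API.
const streamingConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  stream: true,
};
```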


optional includeToolResultStatus?: boolean | "auto";

Defined in: src/models/bedrock.ts:275

Whether to include the status field in tool results.

  • true: Always include status field
  • false: Never include status field
  • 'auto': Automatically determine based on model ID (default)
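The accepted values form a small union, sketched here for illustration:

```typescript
// Sketch of the documented union; 'auto' (the default) defers to the
// library's per-model-ID decision.
type IncludeToolResultStatus = boolean | 'auto';

const alwaysInclude: IncludeToolResultStatus = true;
const defaultBehavior: IncludeToolResultStatus = 'auto';
```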

optional guardrailConfig?: BedrockGuardrailConfig;

Defined in: src/models/bedrock.ts:281

Guardrail configuration for content filtering and safety controls.

See: https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html


optional useNativeTokenCount?: boolean;

Defined in: src/models/bedrock.ts:292

Whether to use the native Bedrock CountTokens API.

When true, countTokens() calls the Bedrock CountTokens API for accurate counts. When false or not set (default), skips the API call and uses the character-based heuristic estimator.

Default: false

optional modelId?: string;

Defined in: src/models/model.ts:91

The model identifier. This typically specifies which model to use from the provider’s catalog.

Inherited from: BaseModelConfig.modelId


optional contextWindowLimit?: number;

Defined in: src/models/model.ts:124

Maximum context window size in tokens for the model.

This value represents the total token capacity shared between input and output. When not provided, it is automatically resolved from a built-in lookup table based on the configured model ID. An explicit value always takes precedence.

When modelId is changed via updateConfig(), this value is automatically re-resolved if it was initially auto-populated. Explicitly set values are preserved.

Inherited from: BaseModelConfig.contextWindowLimit
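To illustrate the precedence rule described above: an explicit value always wins over the built-in lookup table, while omitting the field lets the library resolve it from modelId. The numeric limit below is a hypothetical placeholder, not the real catalog value:

```typescript
// Explicit value: always preserved, even across updateConfig() calls.
const explicitLimit = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  contextWindowLimit: 200_000, // hypothetical placeholder value
};

// No contextWindowLimit: the library auto-resolves it from modelId
// via its built-in lookup table.
const autoResolved = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
};
```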