BedrockModelConfig
Defined in: src/models/bedrock.ts:209
Configuration interface for AWS Bedrock model provider.
Extends BaseModelConfig with Bedrock-specific configuration options for model parameters, caching, and additional request/response fields.
Example

const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  maxTokens: 1024,
  temperature: 0.7,
  cacheConfig: { strategy: 'auto' }
}

Extends

- BaseModelConfig

Extended by

Properties
Section titled “Properties”maxTokens?
optional maxTokens?: number;

Defined in: src/models/bedrock.ts:215
Maximum number of tokens to generate in the response.
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html
Overrides

BaseModelConfig.maxTokens

temperature?

optional temperature?: number;

Defined in: src/models/bedrock.ts:222
Controls randomness in generation.
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html
Overrides

BaseModelConfig.temperature

topP?

optional topP?: number;

Defined in: src/models/bedrock.ts:229
Controls diversity via nucleus sampling.
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html
Overrides

BaseModelConfig.topP

stopSequences?

optional stopSequences?: string[];

Defined in: src/models/bedrock.ts:234
Array of sequences that will stop generation when encountered.
cacheConfig?
optional cacheConfig?: CacheConfig;

Defined in: src/models/bedrock.ts:241
Configuration for prompt caching.
https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html
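A caching-enabled config might look like the sketch below. The local type declarations are simplified stand-ins (only the `strategy` field from the interface's own example is modeled), not the library's actual `CacheConfig` or `BedrockModelConfig` declarations.

```typescript
// Simplified stand-ins for the documented types; the real declarations
// live in src/models/bedrock.ts and are richer than this sketch.
type CacheConfig = { strategy: string };

interface BedrockModelConfig {
  modelId?: string;
  maxTokens?: number;
  cacheConfig?: CacheConfig;
}

// Opt in to automatic prompt caching, as in the interface's own example.
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  maxTokens: 1024,
  cacheConfig: { strategy: 'auto' },
};
```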
additionalRequestFields?
optional additionalRequestFields?: JSONValue;

Defined in: src/models/bedrock.ts:246
Additional fields to include in the Bedrock request.
additionalResponseFieldPaths?
optional additionalResponseFieldPaths?: string[];

Defined in: src/models/bedrock.ts:251
Additional response field paths to extract from the Bedrock response.
additionalArgs?
optional additionalArgs?: JSONValue;

Defined in: src/models/bedrock.ts:257
Additional arguments to pass through to the Bedrock Converse API.
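Taken together, the `additional*` properties let extra data flow into the Converse request and back out of the response. The sketch below is illustrative only: `top_k` is an assumed example of a model-specific request field, and the JSON Pointer-style response path is an assumption based on the Converse API's conventions.

```typescript
// Minimal stand-in for the library's JSONValue type.
type JSONValue =
  | string
  | number
  | boolean
  | null
  | JSONValue[]
  | { [key: string]: JSONValue };

interface BedrockModelConfig {
  modelId?: string;
  additionalRequestFields?: JSONValue;
  additionalResponseFieldPaths?: string[];
  additionalArgs?: JSONValue;
}

const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  // Extra model-specific field merged into the request body (top_k is an
  // assumed example; check your model's docs for supported fields).
  additionalRequestFields: { top_k: 50 },
  // Response field paths to surface back to the caller (syntax assumed).
  additionalResponseFieldPaths: ['/stopReason'],
};
```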
stream?
optional stream?: boolean;

Defined in: src/models/bedrock.ts:267
Whether to stream responses from the model. When enabled, the ConverseStream API is used instead of the Converse API.
- https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html
- https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html
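A streaming config is just the flag flipped on; the sketch below models only the two relevant properties, not the full interface.

```typescript
interface BedrockModelConfig {
  modelId?: string;
  stream?: boolean;
}

// stream: true routes the request through ConverseStream, so tokens arrive
// incrementally rather than in a single final Converse response.
const streamingConfig: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  stream: true,
};
```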
includeToolResultStatus?
optional includeToolResultStatus?: boolean | "auto";

Defined in: src/models/bedrock.ts:275
Flag to include status field in tool results.
- true: Always include the status field
- false: Never include the status field
- 'auto': Automatically determine based on model ID (default)
guardrailConfig?
optional guardrailConfig?: BedrockGuardrailConfig;

Defined in: src/models/bedrock.ts:281
Guardrail configuration for content filtering and safety controls.
https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html
useNativeTokenCount?
optional useNativeTokenCount?: boolean;

Defined in: src/models/bedrock.ts:292
Whether to use the native Bedrock CountTokens API.
When true, countTokens() calls the Bedrock CountTokens API for
accurate counts. When false or not set (default), skips the API call and uses
the character-based heuristic estimator.
Default Value

false
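The dispatch described above can be sketched as follows. The function and the 4-characters-per-token ratio are illustrative assumptions; the real countTokens() lives on the model instance, and its heuristic estimator may use a different ratio.

```typescript
interface BedrockModelConfig {
  modelId?: string;
  useNativeTokenCount?: boolean;
}

// Hypothetical sketch of the documented countTokens() dispatch.
function countTokensSketch(text: string, config: BedrockModelConfig): number {
  if (config.useNativeTokenCount) {
    // In the real library this path would call the Bedrock CountTokens API
    // for an accurate count; stubbed out in this sketch.
    throw new Error('CountTokens API call not implemented in this sketch');
  }
  // Default path: character-based heuristic estimate (ratio is an assumption).
  return Math.ceil(text.length / 4);
}
```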
modelId?

optional modelId?: string;

Defined in: src/models/model.ts:91
The model identifier. This typically specifies which model to use from the provider’s catalog.
Inherited from

BaseModelConfig.modelId

contextWindowLimit?

optional contextWindowLimit?: number;

Defined in: src/models/model.ts:124
Maximum context window size in tokens for the model.
This value represents the total token capacity shared between input and output. When not provided, it is automatically resolved from a built-in lookup table based on the configured model ID. An explicit value always takes precedence.
When modelId is changed via updateConfig(), this value is automatically
re-resolved if it was initially auto-populated. Explicitly set values are preserved.
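The precedence rule described above can be sketched as a small resolver. The helper name and the lookup-table entries are illustrative assumptions, not the library's actual implementation or model limits.

```typescript
// Hypothetical stand-in for the library's built-in lookup table
// (entries are made up for illustration).
const CONTEXT_WINDOW_TABLE: Record<string, number> = {
  'example.model-a': 200_000,
  'example.model-b': 128_000,
};

interface BedrockModelConfig {
  modelId?: string;
  contextWindowLimit?: number;
}

// An explicit contextWindowLimit always wins; otherwise the limit is
// auto-resolved from the table using the configured model ID.
function resolveContextWindow(config: BedrockModelConfig): number | undefined {
  if (config.contextWindowLimit !== undefined) {
    return config.contextWindowLimit; // explicit value takes precedence
  }
  return config.modelId !== undefined
    ? CONTEXT_WINDOW_TABLE[config.modelId]
    : undefined;
}
```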