BedrockModelConfig
Defined in: src/models/bedrock.ts:211
Configuration interface for the AWS Bedrock model provider.
Extends BaseModelConfig with Bedrock-specific configuration options for model parameters, caching, and additional request/response fields.
Example

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  maxTokens: 1024,
  temperature: 0.7,
  cacheConfig: { strategy: 'auto' }
}
```

Extends

- BaseModelConfig

Extended by

Properties
Section titled “Properties”maxTokens?
Section titled “maxTokens?”optional maxTokens?: number;Defined in: src/models/bedrock.ts:217
Maximum number of tokens to generate in the response.
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html
Overrides

BaseModelConfig.maxTokens

temperature?

```ts
optional temperature?: number;
```

Defined in: src/models/bedrock.ts:224
Controls randomness in generation.
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html
Overrides

BaseModelConfig.temperature

topP?

```ts
optional topP?: number;
```

Defined in: src/models/bedrock.ts:231
Controls diversity via nucleus sampling.
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html
Overrides

BaseModelConfig.topP

stopSequences?

```ts
optional stopSequences?: string[];
```

Defined in: src/models/bedrock.ts:236
Array of sequences that will stop generation when encountered.
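Taken together, maxTokens, temperature, topP, and stopSequences map onto Bedrock's InferenceConfiguration. A minimal sketch combining them (the stop sequence value is illustrative):

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  maxTokens: 1024,        // cap on generated tokens
  temperature: 0.7,       // higher values increase randomness
  topP: 0.9,              // nucleus-sampling cutoff
  stopSequences: ['END'], // illustrative: generation halts at this string
};
```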
cacheConfig?

```ts
optional cacheConfig?: CacheConfig;
```

Defined in: src/models/bedrock.ts:243
Configuration for prompt caching.
https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html
additionalRequestFields?

```ts
optional additionalRequestFields?: JSONValue;
```

Defined in: src/models/bedrock.ts:248
Additional fields to include in the Bedrock request.
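If this maps to the Converse API's additionalModelRequestFields (an assumption worth checking against the implementation), it is the place for model-specific parameters that InferenceConfiguration does not cover, such as Anthropic's top_k:

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  additionalRequestFields: {
    top_k: 250, // model-specific sampling parameter; accepted keys depend on the model
  },
};
```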
additionalResponseFieldPaths?

```ts
optional additionalResponseFieldPaths?: string[];
```

Defined in: src/models/bedrock.ts:253
Additional response field paths to extract from the Bedrock response.
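Assuming these follow the Converse API's additionalModelResponseFieldPaths convention, each entry is a JSON Pointer into the raw model response:

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  // JSON Pointer paths into the raw response; the fields actually
  // available depend on the underlying model.
  additionalResponseFieldPaths: ['/stop_sequence'],
};
```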
additionalArgs?

```ts
optional additionalArgs?: JSONValue;
```

Defined in: src/models/bedrock.ts:259
Additional arguments to pass through to the Bedrock Converse API.
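A sketch, assuming these arguments are merged into the top-level Converse request (the exact merge behavior is implementation-defined); requestMetadata is one top-level Converse field this could carry:

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  additionalArgs: {
    // requestMetadata is a top-level Converse request field; the
    // key/value pair shown here is hypothetical.
    requestMetadata: { tenant: 'acme' },
  },
};
```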
stream?

```ts
optional stream?: boolean;
```

Defined in: src/models/bedrock.ts:269
Whether to stream responses from the model. When enabled, the ConverseStream API is used instead of the Converse API.
- https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html
- https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html
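For example:

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  stream: true, // responses arrive incrementally via ConverseStream
};
```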
includeToolResultStatus?

```ts
optional includeToolResultStatus?: boolean | "auto";
```

Defined in: src/models/bedrock.ts:277
Flag controlling whether a status field is included in tool results.

- true: always include the status field
- false: never include the status field
- 'auto': determine automatically based on the model ID (default)
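For instance, to force the status field off rather than rely on the per-model default:

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  includeToolResultStatus: false, // never attach status to tool results
};
```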
guardrailConfig?

```ts
optional guardrailConfig?: BedrockGuardrailConfig;
```

Defined in: src/models/bedrock.ts:283
Guardrail configuration for content filtering and safety controls.
https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html
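A sketch, assuming BedrockGuardrailConfig mirrors the Converse API's GuardrailConfiguration (check the type definition for the actual fields); the identifier and version shown are placeholders:

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  guardrailConfig: {
    guardrailIdentifier: 'gr-example123', // placeholder guardrail ID
    guardrailVersion: '1',                // placeholder guardrail version
  },
};
```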
useNativeTokenCount?

```ts
optional useNativeTokenCount?: boolean;
```

Defined in: src/models/bedrock.ts:292
Whether to use the native Bedrock CountTokens API.
When true (default), countTokens() calls the Bedrock CountTokens API for
accurate counts. When false, skips the API call and uses the character-based
heuristic estimator.
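For example, to avoid the extra API round trip and accept a rougher estimate:

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  useNativeTokenCount: false, // countTokens() uses the character-based heuristic
};
```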
modelId?

```ts
optional modelId?: string;
```

Defined in: src/models/model.ts:91
The model identifier. This typically specifies which model to use from the provider’s catalog.
Inherited from

BaseModelConfig.modelId

contextWindowLimit?

```ts
optional contextWindowLimit?: number;
```

Defined in: src/models/model.ts:124
Maximum context window size in tokens for the model.
This value represents the total token capacity shared between input and output. When not provided, it is automatically resolved from a built-in lookup table based on the configured model ID. An explicit value always takes precedence.
When modelId is changed via updateConfig(), this value is automatically
re-resolved if it was initially auto-populated. Explicitly set values are preserved.
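For example, an explicit limit bypasses the lookup table and survives later modelId changes (the value shown is illustrative):

```ts
const config: BedrockModelConfig = {
  modelId: 'global.anthropic.claude-sonnet-4-6',
  contextWindowLimit: 200_000, // explicit value; never re-resolved by updateConfig()
};
```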