GoModel routes Bedrock requests through the Bedrock Runtime Converse and ConverseStream APIs, which present a uniform request and response shape across the model families Bedrock hosts (Anthropic, Amazon Nova, Meta Llama, Mistral, Cohere, AI21, and others). Responses are normalized into OpenAI-compatible chat completions. Flow:
Client -> GoModel -> Amazon Bedrock
Before you start
- An AWS account with Bedrock access in your chosen region.
- Model access granted for the foundation models you want to call (Bedrock console → Model access).
- AWS credentials reachable through the standard AWS credential chain: env vars, AWS_PROFILE, IAM Identity Center, container or instance roles.
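A minimal IAM policy for this flow might look like the following sketch. The action list is an assumption based on the Converse and model-listing calls this guide describes (Converse and ConverseStream are authorized via the InvokeModel actions); in production, scope Resource more tightly than a wildcard.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
        "bedrock:ListFoundationModels"
      ],
      "Resource": "*"
    }
  ]
}
```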
1. Set the AWS region
Set BEDROCK_BASE_URL to either an AWS region (recommended) or a fully qualified Bedrock endpoint URL:
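For example (region and endpoint values are illustrative):

```shell
# Recommended: just the region
export BEDROCK_BASE_URL=us-east-1

# Alternatively, a fully qualified Bedrock endpoint URL
# export BEDROCK_BASE_URL=https://bedrock-runtime.us-east-1.amazonaws.com
```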
If BEDROCK_BASE_URL is empty, GoModel falls back to AWS_REGION or AWS_DEFAULT_REGION from the standard AWS environment.
2. Make AWS credentials available
Any method the AWS SDK supports works. The simplest for local testing is exporting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (plus AWS_SESSION_TOKEN for temporary credentials) in the environment.

3. (Optional) Configure a model list
By default GoModel queries Bedrock's control plane and lists all on-demand text models the account has access to. To restrict or extend the list (for example, to pin inference profile IDs or custom model ARNs), set BEDROCK_MODELS:
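As a sketch: the model and inference profile IDs below are examples, and the comma-separated format is an assumption; check your GoModel configuration reference for the exact syntax.

```shell
# Example IDs; the comma-separated format is an assumption
export BEDROCK_MODELS="anthropic.claude-3-5-sonnet-20240620-v1:0,us.anthropic.claude-3-5-sonnet-20240620-v1:0"
```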
With CONFIGURED_PROVIDER_MODELS_MODE=fallback, the configured list is used when Bedrock's listing call is unavailable or empty. Set CONFIGURED_PROVIDER_MODELS_MODE=allowlist to expose only the configured models and skip the upstream ListFoundationModels call.
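For instance:

```shell
# Serve only the configured list; skip the upstream ListFoundationModels call
export CONFIGURED_PROVIDER_MODELS_MODE=allowlist
```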
4. Start GoModel
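With the variables above exported, a launch might look like the following; the binary name is an assumption, so substitute however you actually run GoModel.

```shell
# Hypothetical invocation; the binary name is an assumption
BEDROCK_BASE_URL=us-east-1 ./gomodel
```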
5. Verify the model registry
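A quick check; the listen address and port are assumptions about your local instance.

```shell
# Adjust host/port to wherever your GoModel instance listens
curl -s http://localhost:8080/v1/models
```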
Expect:
- a 200 OK
- Bedrock models with owned_by set to the originating model vendor (e.g. anthropic, amazon), reflecting Bedrock's ProviderName field
6. Verify Chat Completions
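A minimal request; the local address and the model ID are illustrative, so substitute a model your account has access to.

```shell
curl -s http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "messages": [{"role": "user", "content": "Reply with exactly: ok"}]
  }'
```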
Expect:
- a 200 OK
- assistant content containing ok
Streaming (stream: true) and the Responses API (/v1/responses) work the same way; Responses requests are bridged onto Converse internally.
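For example, the same request with streaming enabled, again against an assumed local address:

```shell
curl -s -N http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "anthropic.claude-3-5-sonnet-20240620-v1:0", "stream": true, "messages": [{"role": "user", "content": "ok"}]}'
```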
Current status
What is integrated today:
- Bedrock Runtime Converse and ConverseStream for chat and Responses
- automatic model discovery via ListFoundationModels (on-demand text models)
- configured model lists through BEDROCK_MODELS or YAML models: in fallback or allowlist mode
Not yet integrated:
- Bedrock embeddings (the InvokeModel embedding path is model-specific and not yet wired up)
Troubleshooting
- AccessDeniedException or 403: the AWS principal lacks Bedrock permissions, or model access has not been granted in the Bedrock console for the requested model.
- ValidationException: ... on-demand throughput isn't supported: the model requires an inference profile or provisioned throughput. Set BEDROCK_MODELS (or YAML models:) to the inference profile ID instead of the base model ID.
- Model registry has no models: either the region has no on-demand text models for the account, or the control-plane call failed. Set BEDROCK_MODELS to populate the registry from configuration, or check the IAM policy attached to the credentials.
- Wrong region inferred: set BEDROCK_BASE_URL explicitly. When passed a full endpoint URL, GoModel only extracts the region segment if the host ends in .amazonaws.com; custom endpoints should be paired with AWS_REGION.
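The host rule above can be sketched like this; it is an illustration of the rule, not GoModel's actual code.

```shell
url="https://bedrock-runtime.us-east-1.amazonaws.com"
host="${url#*://}"                   # strip the scheme
case "$host" in
  *.amazonaws.com)
    # e.g. bedrock-runtime.us-east-1.amazonaws.com -> us-east-1
    region="$(printf '%s\n' "$host" | cut -d. -f2)"
    ;;
  *)
    region=""                        # custom endpoint: rely on AWS_REGION instead
    ;;
esac
echo "$region"
```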
References
- Bedrock Runtime Converse API: https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html
- Model access in the Bedrock console: https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html
- AWS credential chain: https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html