

GoModel routes OpenAI-compatible requests to many AI providers through a single gateway. Most providers work out of the box once you set their API key — set the env var, start GoModel, and call /v1/chat/completions or /v1/responses as usual. The pages in this section exist for providers whose setup is not purely “set an API key”: auth flows that pull from cloud credentials, deployment-based URLs, region/project requirements, dual native + OpenAI-compatible API surfaces, or other quirks worth documenting. If a provider isn’t listed separately, its default configuration is enough.
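As a concrete sketch of that default flow, the snippet below sets one provider key and calls the gateway; the listening port, the binary name, and the model ID are illustrative assumptions, not values documented on this page:

```shell
# Minimal sketch of the "set an API key and go" flow.
# The port (8080), binary name, and model ID are assumptions for illustration.

# 1. Set the provider's API key.
export OPENAI_API_KEY="sk-..."

# 2. Start GoModel (exact invocation and address: see Configuration).
gomodel &

# 3. Call the OpenAI-compatible endpoint as usual.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'
```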

Supported providers

| Provider | Credential | Guide |
| --- | --- | --- |
| OpenAI | OPENAI_API_KEY | |
| Anthropic | ANTHROPIC_API_KEY | |
| Google Gemini | GEMINI_API_KEY | Google Gemini |
| Google Vertex AI | VERTEX_PROJECT + VERTEX_LOCATION + GCP credentials | Google Vertex AI |
| DeepSeek | DEEPSEEK_API_KEY | DeepSeek |
| Groq | GROQ_API_KEY | |
| OpenRouter | OPENROUTER_API_KEY | |
| Z.ai | ZAI_API_KEY (ZAI_BASE_URL optional) | |
| xAI (Grok) | XAI_API_KEY | |
| MiniMax | MINIMAX_API_KEY (MINIMAX_BASE_URL optional) | |
| Azure OpenAI | AZURE_API_KEY + AZURE_BASE_URL (AZURE_API_VERSION optional) | Azure OpenAI |
| Amazon Bedrock | BEDROCK_BASE_URL (region or endpoint) + AWS credentials | Amazon Bedrock |
| Oracle GenAI | ORACLE_API_KEY + ORACLE_BASE_URL | Oracle GenAI |
| Ollama | OLLAMA_BASE_URL | Ollama |
| vLLM | VLLM_BASE_URL (VLLM_API_KEY optional) | vLLM |
See the README provider table for per-provider feature support (chat, Responses, embeddings, files, batches, passthrough).

Why some providers have dedicated pages

These are the providers most users hit friction on:
  • Google Vertex AI — needs a GCP project, region, and either Application Default Credentials or a service-account JSON key. Multi-region or multi-account setups use suffixed env vars.
  • Amazon Bedrock — no API key of its own; authenticates through the AWS credential chain (env, profile, IAM Identity Center, instance/container roles). Requires explicit model access in the Bedrock console.
  • Azure OpenAI — deployment-scoped base URLs, the api-version query parameter, and the api-key header instead of Authorization: Bearer.
  • Oracle GenAI — requires an OCI IAM policy for generativeaiapikey and a region-specific OpenAI-compatible endpoint URL.
  • Google Gemini (AI Studio) — two routing modes (native generateContent vs OpenAI-compatible) with different image-input behavior.
  • DeepSeek — reasoning effort mapping quirks for DeepSeek V4.
  • Ollama / vLLM — local-model hosting with optional multi-instance setup through suffixed env vars and provider-qualified model IDs.
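To illustrate the multi-instance pattern for local hosts, here is a sketch using a hypothetical suffix convention; the exact variable names and model-ID syntax are assumptions, so check the Ollama and vLLM guides for the real scheme:

```shell
# Hypothetical multi-instance setup (suffix names and model-ID syntax assumed).
export OLLAMA_BASE_URL="http://localhost:11434"       # default instance
export OLLAMA_BASE_URL_LAB="http://lab-host:11434"    # assumed suffixed variant

# A request would then pick an instance via a provider-qualified model ID,
# e.g. something like "ollama-lab/llama3" (illustrative only).
```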

Providers without a dedicated page

OpenAI, Anthropic, Groq, OpenRouter, Z.ai, xAI, and MiniMax follow the same pattern: set the API key (and optional base URL where supported), start GoModel, and route by model ID. The full env-var reference for every provider lives in Configuration. If you hit something unexpected with a provider that doesn’t have a guide here, that’s worth a bug report — chances are it’s a quirk we should document.
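To make "set the key and route by model ID" concrete, the sketch below builds (but does not send) an OpenAI-compatible request to a GoModel gateway using only the standard library; the gateway address, the bearer header, and the provider-qualified model ID are illustrative assumptions:

```python
import json
import urllib.request

# Illustrative values only: the gateway address, auth header, and model-ID
# format are assumptions, not confirmed GoModel defaults.
GATEWAY = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "groq/llama-3.3-70b-versatile",  # assumed provider-qualified ID
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    GATEWAY,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer unused",  # provider keys live server-side as env vars
    },
    method="POST",
)

# The request is fully formed here; sending it is just
# urllib.request.urlopen(req) once a gateway is actually running.
print(req.get_method(), req.full_url)
```

Because the provider key is configured on the gateway, the client side stays identical no matter which provider the model ID resolves to.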