
GoModel routes requests to DeepSeek through DeepSeek's chat completions API. DeepSeek does not expose a native Responses API, so GoModel translates /v1/responses requests to /chat/completions automatically.
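Because the translation happens inside GoModel, clients keep posting to /v1/responses unchanged. A minimal sketch of building such a request in Go (the base URL, port, helper name, and model name are assumptions for illustration, not part of GoModel's API):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// newResponsesRequest builds a Responses API request against a local
// GoModel instance. GoModel forwards it to DeepSeek's /chat/completions.
func newResponsesRequest(baseURL, model, input string) (*http.Request, error) {
	payload, err := json.Marshal(map[string]any{
		"model": model,
		"input": input,
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/v1/responses", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, _ := newResponsesRequest("http://localhost:8080", "deepseek-reasoner", "hello")
	fmt.Println(req.Method, req.URL.Path) // POST /v1/responses
}
```

The client never needs to know that /chat/completions is the upstream endpoint.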

1. Configure DeepSeek

Env-only is enough:
export DEEPSEEK_API_KEY="..."
Or in config.yaml (not recommended):
providers:
  deepseek:
    type: deepseek
    base_url: "https://api.deepseek.com"
    api_key: "${DEEPSEEK_API_KEY}"
If you previously configured DeepSeek as type: openai, switch it to type: deepseek so GoModel translates /v1/responses and remaps reasoning effort. The generic OpenAI provider does neither.

2. Reasoning effort mapping

DeepSeek V4 reasoning models accept reasoning_effort as a top-level string with two levels: high and max. GoModel accepts the standard OpenAI-style levels and remaps them:
| Client sends | DeepSeek receives |
| --- | --- |
| low | high |
| medium | high |
| high | high |
| xhigh / max | max |
| anything else | passed through |
low and medium are mapped up to high because DeepSeek's reasoning models do not accept lower levels. If you want to avoid reasoning entirely, omit the reasoning field instead of passing low.

GoModel rewrites the request body so DeepSeek sees a top-level reasoning_effort: "..." instead of OpenAI's nested "reasoning": {"effort": "..."} shape. No client change is required.
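The table and the body rewrite can be sketched together in a few lines of Go. This is an illustration of the behavior described above, not GoModel's actual implementation; the function name is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rewriteReasoning turns OpenAI's nested {"reasoning":{"effort":"..."}}
// into DeepSeek's top-level {"reasoning_effort":"..."}, remapping the
// effort level per the table above.
func rewriteReasoning(body []byte) ([]byte, error) {
	var req map[string]any
	if err := json.Unmarshal(body, &req); err != nil {
		return nil, err
	}
	if r, ok := req["reasoning"].(map[string]any); ok {
		if effort, ok := r["effort"].(string); ok {
			switch effort {
			case "low", "medium", "high":
				effort = "high" // DeepSeek accepts no level below high
			case "xhigh", "max":
				effort = "max"
				// anything else is passed through unchanged
			}
			req["reasoning_effort"] = effort
		}
		delete(req, "reasoning")
	}
	return json.Marshal(req)
}

func main() {
	in := []byte(`{"model":"deepseek-reasoner","reasoning":{"effort":"low"}}`)
	out, _ := rewriteReasoning(in)
	fmt.Println(string(out)) // {"model":"deepseek-reasoner","reasoning_effort":"high"}
}
```

Requests without a reasoning field pass through untouched, which is why omitting the field is the right way to skip reasoning entirely.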

Current support

Integrated:
  • chat completions and streaming
  • Responses API and streaming (translated to chat completions)
  • model listing through /models
Not supported by DeepSeek:
  • embeddings (returns an invalid_request_error)