GoModel routes to DeepSeek through DeepSeek’s chat completions API. DeepSeek does not expose a native Responses API, so GoModel translates
/v1/responses
requests to /chat/completions automatically.
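The endpoint translation can be sketched roughly as follows. Only the `/v1/responses` → `/chat/completions` mapping comes from this page; the function name and the field handling are illustrative assumptions based on the public OpenAI schemas, not GoModel’s actual implementation.

```python
# Rough sketch of the /v1/responses -> /chat/completions body translation.
# Field handling is an assumption based on the public OpenAI schemas;
# GoModel's real implementation is not shown on this page.

def translate_responses_to_chat(body: dict) -> dict:
    """Turn a minimal Responses API body into a chat completions body."""
    translated = {"model": body["model"]}

    # A plain-string `input` becomes a single user message.
    user_input = body.get("input", "")
    if isinstance(user_input, str):
        translated["messages"] = [{"role": "user", "content": user_input}]

    return translated

print(translate_responses_to_chat({"model": "deepseek-chat", "input": "Hi"}))
# -> {'model': 'deepseek-chat', 'messages': [{'role': 'user', 'content': 'Hi'}]}
```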
## 1. Configure DeepSeek
Setting the API key through environment variables is enough; you can also declare the provider in config.yaml (not recommended).
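A minimal config.yaml sketch follows. Only `type: deepseek` is confirmed by this page; the surrounding keys and the environment variable name are assumptions, so check GoModel’s configuration reference before copying them.

```yaml
# config.yaml — illustrative sketch; only `type: deepseek` comes from this
# page. The provider block layout and the env var name are assumptions.
providers:
  deepseek:
    type: deepseek
    api_key: ${DEEPSEEK_API_KEY}
```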
If you previously configured DeepSeek as `type: openai`, switch it to `type: deepseek` so that GoModel translates /v1/responses requests and remaps reasoning effort; the generic OpenAI provider does neither.

## 2. Reasoning effort mapping
DeepSeek V4 reasoning models accept `reasoning_effort` as a top-level string with two levels: `high` and `max`. GoModel accepts the standard OpenAI-style levels and remaps them:
| Client sends | DeepSeek receives |
|---|---|
| `low` | `high` |
| `medium` | `high` |
| `high` | `high` |
| `xhigh` / `max` | `max` |
| anything else | passed through |
`low` and `medium` are mapped up to `high` because DeepSeek’s reasoning models do not accept lower levels. If you want to avoid reasoning entirely, omit the `reasoning` field instead of passing `low`.
GoModel rewrites the request body so DeepSeek sees a top-level `reasoning_effort: "..."` instead of OpenAI’s nested `"reasoning": {"effort": "..."}` shape. No client change is required.
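The remap plus body rewrite can be sketched like this. The mapping table itself is from this page; the function and the model name are illustrative assumptions, not GoModel’s code.

```python
# Sketch of the reasoning-effort rewrite described above. The mapping is
# taken from the table on this page; the function is illustrative only.

EFFORT_MAP = {"low": "high", "medium": "high", "high": "high",
              "xhigh": "max", "max": "max"}

def rewrite_reasoning(body: dict) -> dict:
    """Move OpenAI's nested reasoning.effort to DeepSeek's top-level field."""
    body = dict(body)
    reasoning = body.pop("reasoning", None)
    if reasoning and "effort" in reasoning:
        effort = reasoning["effort"]
        # Levels outside the table pass through unchanged.
        body["reasoning_effort"] = EFFORT_MAP.get(effort, effort)
    return body

print(rewrite_reasoning({"model": "example-model",
                         "reasoning": {"effort": "medium"}}))
# -> {'model': 'example-model', 'reasoning_effort': 'high'}
```

If the client sends no `reasoning` field at all, the body passes through untouched, which matches the advice above to omit the field rather than send `low`.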
## Current support

Integrated:

- chat completions and streaming
- Responses API and streaming (translated to chat completions)
- model listing through /models

Not integrated:

- embeddings (returns an `invalid_request_error`)