GoModel is a good fit for Codex because Codex already targets the OpenAI Responses API. The request flow is: Codex -> GoModel -> upstream model provider.

Before you start

  • Install Codex on your machine.
  • Choose a GoModel master key, for example change-me.
  • Make sure GoModel has the upstream provider key for the models you want to use.

You can keep using Codex with a ChatGPT subscription sign-in, but GoModel still needs a gateway credential from Codex and an upstream provider key of its own. In this guide, OPENAI_API_KEY=change-me is the GoModel master key that Codex sends to GoModel, not your OpenAI Platform key.

1. Run GoModel

Start GoModel with a master key and an OpenAI provider key:
docker run --rm -p 8080:8080 \
  -e GOMODEL_MASTER_KEY="change-me" \
  -e OPENAI_API_KEY="sk-..." \
  enterpilot/gomodel
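Before sending a full request, you may want to confirm that the container is actually listening. The sketch below is illustrative and not part of GoModel itself; it does a plain TCP check against the port mapping from the docker run command above, so it works without guessing at any gateway endpoints.

```python
import socket

def is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After `docker run ... -p 8080:8080` succeeds, this should report True:
print(is_listening("127.0.0.1", 8080))
```

If this reports False, check that the container started cleanly (for example with docker logs) before moving on to the curl test.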

2. Confirm the Responses API with curl

Before testing Codex itself, verify that GoModel answers a plain Responses API request. This step is optional: if you are sure you have configured a valid OPENAI_API_KEY in GoModel, skip it and go straight to step 3.
curl -s http://localhost:8080/v1/responses \
  -H "Authorization: Bearer change-me" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-mini",
    "input": "Reply with exactly ok",
    "max_output_tokens": 16
  }'
If the gateway is wired correctly, the response will contain ok.
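If you want to check the reply programmatically rather than by eye, you can pull the text out of the response body. The shape assumed below (an output array of message items containing output_text parts) follows the public OpenAI Responses API format; treat this as a sketch, since a gateway response may carry additional fields.

```python
import json

def extract_output_text(payload: dict) -> str:
    """Concatenate all output_text parts from a Responses API payload."""
    parts = []
    for item in payload.get("output", []):
        if item.get("type") == "message":
            for content in item.get("content", []):
                if content.get("type") == "output_text":
                    parts.append(content.get("text", ""))
    return "".join(parts)

# Example payload shaped like the curl response above:
raw = '{"output": [{"type": "message", "content": [{"type": "output_text", "text": "ok"}]}]}'
print(extract_output_text(json.loads(raw)))  # -> ok
```

You could pipe the curl output from step 2 into a script like this to assert on the reply in CI.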

3. Configure Codex to use GoModel

Use a Responses-based provider in your Codex config file:
model_provider = "gomodel"
model = "gpt-4.1-mini"

[model_providers.gomodel]
name = "GoModel"
base_url = "http://localhost:8080/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"
Then export the GoModel master key for that provider:
export OPENAI_API_KEY=change-me
In local validation, Codex 0.122.0 did not use the OPENAI_BASE_URL environment variable. Use the provider config above, or set openai_base_url in the Codex config file if you intentionally want to override the built-in OpenAI provider.

4. Run a Codex test prompt

codex exec -m gpt-4.1-mini 'Reply with exactly ok and no punctuation.'
The validated result was:
ok

DeepSeek V4

Codex sends POST /v1/responses. DeepSeek exposes chat completions instead of a native Responses API, so configure the first-class DeepSeek provider and let GoModel translate the request.
providers:
  deepseek:
    type: deepseek
    base_url: "https://api.deepseek.com"
    api_key: "${DEEPSEEK_API_KEY}"
If you previously configured DeepSeek as type: openai, change it to type: deepseek for Codex. The generic OpenAI provider forwards /responses upstream, while the DeepSeek provider translates /responses to /chat/completions. DeepSeek V4 only accepts the reasoning efforts high and max, so GoModel maps low and medium up to high; see the DeepSeek guide for the full reasoning effort mapping table. Then use the DeepSeek model name in Codex:
model_provider = "gomodel"
model = "deepseek-v4-pro"

[model_providers.gomodel]
name = "GoModel"
base_url = "http://localhost:8080/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"
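To make the translation concrete, here is a minimal sketch of what the DeepSeek provider is described as doing: rewriting a /v1/responses body into a /chat/completions body and clamping reasoning effort to the values DeepSeek V4 accepts. The field handling is simplified and the function is hypothetical; GoModel's actual mapping is more complete.

```python
# Illustrative sketch of the /responses -> /chat/completions translation
# described above. Field coverage is deliberately minimal.

EFFORT_MAP = {"low": "high", "medium": "high", "high": "high", "max": "max"}

def responses_to_chat(req: dict) -> dict:
    body = {
        "model": req["model"],
        # A plain string input becomes a single user message.
        "messages": [{"role": "user", "content": req["input"]}]
        if isinstance(req.get("input"), str)
        else req.get("input", []),
    }
    if "max_output_tokens" in req:
        body["max_tokens"] = req["max_output_tokens"]
    effort = (req.get("reasoning") or {}).get("effort")
    if effort:
        # DeepSeek V4 only accepts "high" and "max", so low/medium map up.
        body["reasoning_effort"] = EFFORT_MAP.get(effort, "high")
    return body

print(responses_to_chat({
    "model": "deepseek-v4-pro",
    "input": "Reply with exactly ok",
    "max_output_tokens": 16,
    "reasoning": {"effort": "medium"},
}))
```

The key point for Codex users is the effort clamping: a Codex session that requests medium effort still reaches DeepSeek as high.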

5. Check the traffic in GoModel

Open the GoModel dashboard audit logs at http://localhost:8080/admin/dashboard/audit. This lets you confirm that Codex is reaching GoModel and inspect the full request and response trail. From the same dashboard, you can keep following your GoModel traffic and usage.

Current status

  • the recommended integration path is Codex custom provider -> standard http://localhost:8080/v1
  • Codex custom provider mode sends POST /v1/responses
  • Codex 0.122.0 sends an uncompressed JSON request body in this path, so the old --disable enable_request_compression workaround is no longer required
  • ChatGPT subscription sign-in can coexist with the custom provider, but the custom provider still requires the configured env_key

Validation

Validated on April 21, 2026

This guide was validated against:
  • a local GoModel instance on http://localhost:8080
  • Codex CLI 0.122.0
Local validation confirmed:
  • POST /v1/responses returned 200 OK with curl
  • codex exec returned ok through Codex -> GoModel -> OpenAI-compatible upstream
  • Codex sent plain JSON to POST /v1/responses; no Content-Encoding: zstd header was present
  • a ChatGPT-signed-in Codex session worked with the custom gomodel provider when OPENAI_API_KEY was set to the GoModel master key
  • the same custom provider failed without OPENAI_API_KEY, because the provider env_key is still required