GoModel is a good fit for Codex because Codex already targets the OpenAI Responses API. Flow:
Codex -> GoModel -> upstream model provider
Before you start
- Install Codex on your machine.
- Choose a GoModel master key, for example change-me.
- Make sure GoModel has the upstream provider key for the models you want to use.
You can keep using Codex with a ChatGPT subscription sign-in, but GoModel still needs a gateway credential from Codex and an upstream provider key of its own. In this guide, OPENAI_API_KEY=change-me is the GoModel master key that Codex sends to GoModel, not your OpenAI Platform key.

1. Run GoModel
Start GoModel with a master key and an OpenAI provider key.

2. Confirm the Responses API with curl
Before testing Codex itself, you can verify that GoModel answers a normal Responses API request. This step is optional: if you are sure you have configured a valid OPENAI_API_KEY in GoModel, skip it and go straight to step 3.
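A probe along these lines should exercise the gateway end to end. It is a sketch: the Authorization value is the change-me master key from the prerequisites, and the model name is an assumption; use one your upstream provider key can actually access.

```shell
# Minimal Responses API request against the local GoModel gateway.
# The bearer token is the GoModel master key, not an OpenAI Platform key;
# "gpt-4.1-mini" is an example model id.
curl -sS http://localhost:8080/v1/responses \
  -H "Authorization: Bearer change-me" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-mini",
    "input": "Reply with exactly: ok"
  }'
```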
If GoModel is configured correctly, the request returns 200 OK and the model replies with ok.
3. Configure Codex to use GoModel
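A custom provider entry in ~/.codex/config.toml can look like the following sketch. The provider id gomodel is an arbitrary label; adjust the base URL and model to your setup.

```toml
# ~/.codex/config.toml -- route Codex through GoModel.
# env_key names the environment variable whose value Codex sends as the
# credential; here that should hold the GoModel master key.
model_provider = "gomodel"

[model_providers.gomodel]
name = "GoModel"
base_url = "http://localhost:8080/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"
```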
Use a Responses-based provider in your Codex config file. Codex 0.122.0 did not use the OPENAI_BASE_URL environment variable in local validation. Use the provider config above, or set openai_base_url in Codex config if you intentionally want to override the built-in OpenAI provider.

4. Run a Codex test prompt
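With the custom provider in place, a one-shot prompt exercises the full path. This is a sketch: codex exec is Codex's non-interactive mode, and the prompt text is arbitrary.

```shell
# The credential Codex sends is the GoModel master key, exposed through
# the env_key configured for the custom provider.
export OPENAI_API_KEY=change-me
codex exec "Reply with exactly: ok"
```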
DeepSeek V4
Codex sends POST /v1/responses. DeepSeek exposes chat completions instead of
a native Responses API, so configure the first-class DeepSeek provider and let
GoModel translate the request.
If your GoModel provider is configured with type: openai, change it to
type: deepseek for Codex. The generic OpenAI provider forwards /responses
upstream, while the DeepSeek provider translates /responses to
/chat/completions.
See the DeepSeek guide for the full reasoning effort
mapping table (DeepSeek V4 only accepts high and max, so GoModel maps
low and medium up to high).
Then use the DeepSeek model name in Codex:
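For instance (deepseek-chat is an assumed id; substitute the model name your GoModel DeepSeek provider actually serves):

```toml
# ~/.codex/config.toml -- same custom provider, DeepSeek model.
model = "deepseek-chat"
model_provider = "gomodel"
```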
5. Check the traffic in GoModel
Open the GoModel dashboard audit logs: http://localhost:8080/admin/dashboard/audit

This lets you confirm that Codex is reaching GoModel and inspect the full request and response trail. From the same dashboard, you can keep following your GoModel traffic and usage.

Current status
- the recommended integration path is Codex custom provider -> standard http://localhost:8080/v1
- Codex custom provider mode sends POST /v1/responses
- Codex 0.122.0 sends an uncompressed JSON request body in this path, so the old --disable enable_request_compression workaround is no longer required
- ChatGPT subscription sign-in can coexist with the custom provider, but the custom provider still requires the configured env_key
References
- OpenAI Codex discussion: Deprecating chat/completions support in Codex
- OpenAI Codex repository: openai/codex
Validated on April 21, 2026
This guide was validated against:

- a local GoModel instance on http://localhost:8080
- Codex CLI 0.122.0
- POST /v1/responses returned 200 OK with curl
- codex exec returned ok through Codex -> GoModel -> OpenAI-compatible upstream
- Codex sent plain JSON to POST /v1/responses; no Content-Encoding: zstd header was present
- a ChatGPT-signed-in Codex session worked with the custom gomodel provider when OPENAI_API_KEY was set to the GoModel master key
- the same custom provider failed without OPENAI_API_KEY, because the provider env_key is still required