Overview
Guardrails are a pipeline of rules that run before a request reaches any LLM provider. They can inspect, modify, or reject requests, giving you centralized control over every prompt that flows through GoModel. Guardrails work across all text-based endpoints:
- /v1/chat/completions
- /v1/responses
Guardrails for images, TTS, STT, and video models are planned as a separate
system and are not covered here.
Quick Start
Add a guardrails section to your config/config.yaml:
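A minimal sketch of such a section, assuming a top-level `guardrails` key; the exact nesting of `settings` is an assumption based on the Rule Fields and Settings tables on this page:

```yaml
# Hypothetical minimal guardrails config; key names taken from the
# Rule Fields table, nesting of `settings` assumed.
guardrails:
  - name: "Safety Prefix"
    type: system_prompt
    order: 0
    settings:
      mode: inject
      content: "You are a careful, safety-conscious assistant."
```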
How It Works
- Messages are extracted from the incoming request into a normalized format
- The guardrails pipeline processes the messages (inject, modify, or reject)
- Modified messages are applied back to the original request
- The request continues to the LLM provider
The same pipeline applies to both /v1/chat/completions and /v1/responses.
Execution Order
Each guardrail has an `order` value that controls when it runs:
- Same order → run in parallel (concurrently)
- Different order → run sequentially (ascending)
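The two rules above can be combined in one pipeline. A sketch (field names from the Rule Fields table; the `settings` nesting is an assumption):

```yaml
guardrails:
  - name: "Anonymize PII"        # order 0: runs concurrently with "Safety Prefix"
    type: llm_based_altering
    order: 0
    settings:
      model: "small-fast-model"  # hypothetical model selector
  - name: "Safety Prefix"        # order 0: same group, runs in parallel
    type: system_prompt
    order: 0
    settings:
      content: "Be safe and helpful."
  - name: "Final Decorator"      # order 1: runs after the order-0 group finishes
    type: system_prompt
    order: 1
    settings:
      mode: decorator
      content: "Respond in English."
```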
Configuration
Full Structure
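A hedged sketch of the full structure, combining every field from the tables below; the `settings` nesting and the model selector value are assumptions:

```yaml
guardrails:
  - name: "Base Instructions"
    type: system_prompt
    order: 0
    settings:
      mode: inject                 # inject | override | decorator
      content: "Answer concisely."
  - name: "Anonymize PII"
    type: llm_based_altering
    order: 1
    user_path: "internal/guardrails"  # optional base path for auxiliary requests
    settings:
      model: "small-fast-model"       # hypothetical auxiliary model selector
      roles: ["user"]
      max_tokens: 4096
```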
Environment Variable
You can toggle guardrails without editing the config file.

Rule Fields
| Field | Required | Description |
|---|---|---|
| `name` | Yes | Human-readable identifier. Supports spaces and unicode, but not `/`. |
| `type` | Yes | Guardrail type: `system_prompt` or `llm_based_altering`. |
| `user_path` | No | Optional base user path for internal auxiliary guardrail requests. |
| `order` | No | Execution order. Default `0`. Same value = parallel; different = sequential. |
Guardrail Types
system_prompt
Adds, replaces, or decorates the system prompt on every request.
Settings
| Field | Required | Description |
|---|---|---|
| `mode` | No | `inject`, `override`, or `decorator`. Default: `inject`. |
| `content` | Yes | The system prompt text to apply. |
Modes
- inject
- override
- decorator
inject adds a system message only if none exists; existing system prompts are left untouched.

Behavior:
- Request has no system prompt → adds one
- Request already has a system prompt → no change
llm_based_altering
Rewrites selected message roles by calling an auxiliary model before the main
provider request runs. This is useful for PII anonymization and other
prompt-preserving rewrites.
The default prompt is derived from LiteLLM’s data_anonymization guardrail,
so a minimal config acts as an anonymizing preprocessor.
Settings
| Field | Required | Description |
|---|---|---|
| `model` | Yes | Auxiliary model selector used for the rewrite call. |
| `provider` | No | Optional routing hint for `model`. |
| `prompt` | No | Custom rewrite prompt. Defaults to the built-in anonymization prompt. |
| `roles` | No | Message roles to rewrite. Default: `["user"]`. |
| `skip_content_prefix` | No | Skip rewriting when the trimmed message starts with this prefix. |
| `max_tokens` | No | `max_tokens` for the auxiliary rewrite call. Default: `4096`. |
When llm_based_altering calls the auxiliary model, GoModel runs that call through the normal translated request path in-process. That means ordinary workflow selection, fallback, usage, audit, and cache behavior still apply.
The internal request uses:
- path: /v1/chat/completions
- user path: {guardrail.user_path or caller user path}/guardrails/{guardrail name}
- request origin: guardrail
Example
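A sketch of an anonymizing preprocessor using the Settings fields above; the `settings` nesting, model selector, and prefix convention are assumptions:

```yaml
guardrails:
  - name: "Anonymize User Messages"
    type: llm_based_altering
    settings:
      model: "small-fast-model"      # hypothetical auxiliary model selector
      roles: ["user"]
      skip_content_prefix: "[raw]"   # hypothetical marker to bypass rewriting
      max_tokens: 2048
```

With no `prompt` set, the built-in anonymization prompt applies, so this acts as a PII-scrubbing preprocessor.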
Examples
Single Safety Guardrail
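A sketch of a single-guardrail config (`settings` nesting assumed):

```yaml
guardrails:
  - name: "Safety Prefix"
    type: system_prompt
    settings:
      mode: inject
      content: "Be safe and helpful."
```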
The simplest setup adds a safety prefix to every request.

Multiple Guardrails in Parallel
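A sketch of two guardrails sharing an `order` (field names from the Rule Fields table; `settings` nesting and model selector assumed):

```yaml
guardrails:
  - name: "Safety Prefix"
    type: system_prompt
    order: 0                       # same order as below: both run concurrently
    settings:
      content: "Be safe and helpful."
  - name: "Anonymize PII"
    type: llm_based_altering
    order: 0
    settings:
      model: "small-fast-model"    # hypothetical selector
```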
Two guardrails running at the same order execute concurrently.

Sequential Pipeline
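A sketch of a two-stage pipeline (`settings` nesting and model selector assumed):

```yaml
guardrails:
  - name: "Anonymize PII"          # order 0: runs first
    type: llm_based_altering
    order: 0
    settings:
      model: "small-fast-model"    # hypothetical selector
  - name: "Safety Prefix"          # order 1: runs second, sees anonymized messages
    type: system_prompt
    order: 1
    settings:
      content: "Be safe and helpful."
```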
Guardrails with different orders run one after another; later groups see the output of earlier ones.

Mixed Parallel and Sequential
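A sketch combining a parallel group with a later sequential stage (`settings` nesting and model selector assumed):

```yaml
guardrails:
  - name: "Anonymize PII"          # order 0: parallel group
    type: llm_based_altering
    order: 0
    settings:
      model: "small-fast-model"    # hypothetical selector
  - name: "Safety Prefix"          # order 0: runs alongside the anonymizer
    type: system_prompt
    order: 0
    settings:
      content: "Be safe and helpful."
  - name: "Final Override"         # order 1: runs after the order-0 group
    type: system_prompt
    order: 1
    settings:
      mode: override
      content: "Respond in English only."
```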
How It Works With Different Endpoints
Guardrails operate on a normalized message format internally. The adaptation between API-specific request types and this format happens automatically:

| Endpoint | System prompt source | User messages source |
|---|---|---|
| `/v1/chat/completions` | `messages` with `role: "system"` | `messages` array |
| `/v1/responses` | `instructions` field | `input` field |