System Prompts in ThreoAI
This guide explains what a system prompt is, why it matters, and best practices for writing effective system prompts in ThreoAI.
For steps on where to enter a system prompt, see Models & API Keys.
What is a System Prompt
The System Prompt is the foundational instruction that sits at the model level (e.g., GPT-5, GPT-5 Thinking). It runs before anything else and provides the base rules for how the model operates.
Unlike Custom GPT Instructions, the System Prompt is not where you give the model a persona (Sales Coach, Paralegal, QA Agent). Personas belong in Custom GPT Instructions, which layer on top of the system prompt.
The System Prompt defines rules and behaviors that apply universally across all Custom GPTs created from that model.
What Belongs in a System Prompt
- Compliance and Safety Rules - Never reveal PII. Always refuse disallowed content.
- Capabilities Control - Do not search the web unless explicitly enabled.
- Reasoning and Style Defaults - Think step by step. Maintain a professional tone unless overridden.
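The three categories above can be kept as separate, reviewable pieces and joined into the final system prompt. A minimal sketch in Python; the rule text is illustrative, not ThreoAI-supplied wording:

```python
# Illustrative baseline rules for each category; adapt the wording
# to your own compliance, capability, and style requirements.
COMPLIANCE_RULES = "Never reveal PII. Always refuse disallowed content."
CAPABILITY_RULES = "Do not search the web unless explicitly enabled."
STYLE_DEFAULTS = "Think step by step. Maintain a professional tone unless overridden."

def build_system_prompt() -> str:
    """Join the rule categories into a single system prompt string."""
    return "\n".join([COMPLIANCE_RULES, CAPABILITY_RULES, STYLE_DEFAULTS])

print(build_system_prompt())
```

Keeping the categories separate makes it easy to audit each one against policy before they are combined.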
Comparison
| Layer | Purpose | Scope |
|---|---|---|
| System Prompt | Global rules and base behaviors | All Custom GPTs on the model |
| Custom GPT Instructions | Persona and task-specific behavior | One specific Custom GPT |
| User Prompt | Live request from the end user | One conversation turn |
Why Use a System Prompt
The System Prompt is the most strategic place to set rules because it applies to every Custom GPT on that model:
- Consistency - Every GPT inherits the same baseline rules regardless of persona
- Compliance - Add safety rules once and they apply everywhere
- Efficiency - Define guardrails once instead of repeating them in every Custom GPT
- Capability Control - Restrict or enable features at the model level
- Default Reasoning - Establish how the model thinks when no persona overrides are present
Example Scenarios
- All GPTs should automatically redact sensitive data - add the rule once in the system prompt
- Disable web search by default, enable it only for specific GPTs - block in system prompt, override in Custom GPT Instructions
- Enforce professional tone across all client-facing GPTs - define globally in the system prompt
How System Prompts Work at Runtime
When a user interacts with a Custom GPT, ThreoAI processes three layers in order:
System Prompt → Custom GPT Instructions → User Prompt → Model Response
- The System Prompt provides universal guardrails
- The Custom GPT Instructions define the persona and use case
- The User Prompt gives the specific task
Analogy:
- System Prompt = Company policy manual
- Custom GPT Instructions = Employee job description
- User Prompt = A task assigned during the workday
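The layering above resembles the message ordering used by common chat-style model APIs. A hypothetical sketch of how the three layers could map onto a message list; ThreoAI's actual internals may differ:

```python
# Hypothetical mapping of the three layers onto a chat-style message
# list. All prompt text here is illustrative.
system_prompt = "Never reveal PII. Maintain a professional tone."        # model level
gpt_instructions = "You are Banana Inc GPT. Answer employee questions."  # Custom GPT level
user_prompt = "Summarize last quarter's sales."                          # conversation turn

messages = [
    # The system layer comes first, so its guardrails frame everything after it.
    {"role": "system", "content": system_prompt + "\n" + gpt_instructions},
    {"role": "user", "content": user_prompt},
]

print([m["role"] for m in messages])
```

Because the system layer is processed first, later layers operate inside its boundaries rather than replacing it.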
Tips & Best Practices
- Be concise and clear - overly long prompts can confuse the model
- Start with role and mission - define what the GPT should do (e.g., “Provide financial insights for Banana Inc”)
- Set the tone - indicate if replies should be professional, client-friendly, or casual
- Avoid contradictions - don’t say “Be formal” and “Be casual” in the same prompt
- Iterate and test - try different prompts and evaluate how they affect output
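The "avoid contradictions" tip can be partially automated. A toy lint that flags prompts requesting conflicting tones; the conflict pairs are illustrative assumptions, not an exhaustive check:

```python
# Toy contradiction lint: flag prompts that ask for conflicting tones.
# The pairs below are examples only; extend them for your own style rules.
CONFLICTS = [("formal", "casual"), ("concise", "exhaustive")]

def find_tone_conflicts(prompt: str) -> list:
    """Return the conflict pairs whose both words appear in the prompt."""
    text = prompt.lower()
    return [(a, b) for a, b in CONFLICTS if a in text and b in text]

print(find_tone_conflicts("Be formal. Also be casual."))  # → [('formal', 'casual')]
```

A simple keyword check like this will miss paraphrased contradictions, so it complements manual review rather than replacing it.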
Good Examples
Internal Company Assistant:
You are Banana Inc’s internal assistant. Provide concise answers to employee questions and ensure your responses are clear and accurate.
Client-Facing Support:
You are Synthreo GPT. Assist clients by explaining product features and answering questions in a professional, customer-friendly tone.
Analytics Use Case:
You are Banana Inc GPT. Provide clear, data-driven insights based on internal analytics. Explain results in a way non-technical staff can understand.
What to Avoid
Avoid persona-driven prompts in the system message. Anchor your GPT to your company or project. Personas belong in Custom GPT Instructions. For example, do not use:
You are a friendly tutor who helps students with math homework.
Security & Compliance
- Do not include sensitive data or credentials in a system prompt
- Keep compliance requirements (tone, disclaimers, safety notes) in the prompt if your use case requires them
- Review prompts regularly to ensure they align with company or client policies
Troubleshooting
| Issue | Cause | Fix |
|---|---|---|
| Inconsistent outputs | Contradictory instructions | Review prompt for conflicting rules |
| Model ignores later instructions | Prompt is too long | Shorten the system prompt, move details to Custom GPT Instructions |
| Unexpected behavior | System prompt conflicts with Custom GPT Instructions | Ensure the system prompt sets guardrails only, not persona |
| Prompt changes not taking effect | Browser cache or open conversation | Start a new conversation after saving changes |
Frequently Asked Questions
Can I edit the system prompt later? Yes. Go to Models & API Keys and edit the model’s System Message field.
Does the system prompt override user prompts? Yes, it takes precedence. User prompts drive the interaction within the boundaries set by the system prompt.
How is this different from Custom GPT Instructions? System prompts define global rules. Custom GPT Instructions define persona-specific behavior for one GPT.
Can I reuse prompts across models? Yes. You can copy and adapt system prompts across multiple model configurations.
What if my system prompt is too long? Long prompts consume more tokens, leaving less space for conversation history. The model may “forget” earlier parts of the conversation faster. Keep system prompts under 500 tokens when possible.
Can I leave the System Message blank? Yes. If the System Message is left blank, the model operates without a global system-level instruction. Each Custom GPT’s own Instructions then serve as the primary behavioral guide. This is not recommended for production environments where compliance or safety rules are required.
Token Budget Reference
| Metric | Approximate Value |
|---|---|
| 1 token | ~4 characters |
| 100 tokens | ~75 words |
| 1,000 tokens | ~750 words (~1.5 - 2 pages) |
| 500-token system prompt in 16k context | Leaves ~15.5k tokens for instructions + conversation |
Every additional 100 tokens in the system prompt increases average latency by approximately 30 - 60ms.
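The budget figures above can be sketched with the rough 4-characters-per-token rule of thumb from the table. A minimal estimate in Python; real tokenizers vary by model, so treat this as a planning heuristic only:

```python
# Rough token estimate using the ~4 characters/token rule of thumb.
# Actual tokenizers (model-specific) will give somewhat different counts.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def remaining_context(context_window: int, system_prompt: str) -> int:
    """Tokens left for instructions + conversation after the system prompt."""
    return context_window - estimate_tokens(system_prompt)

prompt = "x" * 2000                        # a 2,000-character prompt ≈ 500 tokens
print(estimate_tokens(prompt))             # → 500
print(remaining_context(16_000, prompt))   # → 15500
```

This matches the table's last row: a ~500-token system prompt in a 16k context leaves roughly 15.5k tokens for everything else.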
Related Guides
- Models & API Keys - where to enter and manage system prompts
- Creating Custom GPTs - how to build Custom GPTs that layer on top of system prompts