synthreo.ai

System Prompts in ThreoAI


This guide explains what a system prompt is, why it matters, and best practices for writing effective system prompts in ThreoAI.

For steps on where to enter a system prompt, see Models & API Keys.


The System Prompt is the foundational instruction that sits at the model level (e.g., GPT-5, GPT-5 Thinking). It runs before anything else and provides the base rules for how the model operates.

Unlike Custom GPT Instructions, the System Prompt is not where you give the model a persona (Sales Coach, Paralegal, QA Agent). Personas belong in Custom GPT Instructions, which layer on top of the system prompt.

The System Prompt defines rules and behaviors that apply universally across all Custom GPTs created from that model. For example:

  • Compliance and Safety Rules - "Never reveal PII. Always refuse disallowed content."
  • Capabilities Control - "Do not search the web unless explicitly enabled."
  • Reasoning and Style Defaults - "Think step by step. Maintain a professional tone unless overridden."
Layer                   | Purpose                            | Scope
------------------------|------------------------------------|-----------------------------
System Prompt           | Global rules and base behaviors    | All Custom GPTs on the model
Custom GPT Instructions | Persona and task-specific behavior | One specific Custom GPT
User Prompt             | Live request from the end user     | One conversation turn

The System Prompt is the most strategic place to set rules because it applies to every Custom GPT on that model:

  • Consistency - Every GPT inherits the same baseline rules regardless of persona
  • Compliance - Add safety rules once and they apply everywhere
  • Efficiency - Define guardrails once instead of repeating them in every Custom GPT
  • Capability Control - Restrict or enable features at the model level
  • Default Reasoning - Establish how the model thinks when no persona overrides are present
For example:

  • All GPTs should automatically redact sensitive data - add the rule once in the system prompt
  • Disable web search by default and enable it only for specific GPTs - block it in the system prompt, then override in Custom GPT Instructions
  • Enforce a professional tone across all client-facing GPTs - define it globally in the system prompt

When a user interacts with a Custom GPT, ThreoAI processes three layers in order:

System Prompt → Custom GPT Instructions → User Prompt → Model Response

  • The System Prompt provides universal guardrails
  • The Custom GPT Instructions define the persona and use case
  • The User Prompt gives the specific task
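The layering order above can be sketched as an OpenAI-style chat message list. This is a minimal illustration, not ThreoAI's internal implementation; the role names follow the common chat-completions convention, and all prompt strings are invented for the example:

```python
# Sketch of the three layers, assuming an OpenAI-style chat message format.
# ThreoAI assembles these layers internally; the strings here are illustrative.
system_prompt = "Never reveal PII. Maintain a professional tone."                   # global guardrails
gpt_instructions = "You are Synthreo GPT. Assist clients with product questions."   # persona and use case
user_prompt = "How do I reset my password?"                                         # one conversation turn

# Layers are processed in order: System Prompt -> Custom GPT Instructions -> User Prompt.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "system", "content": gpt_instructions},
    {"role": "user", "content": user_prompt},
]
```

Because the system prompt comes first, its guardrails bound everything the later layers ask for, which is why persona text belongs in the instructions layer rather than here.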

Analogy:

  • System Prompt = Company policy manual
  • Custom GPT Instructions = Employee job description
  • User Prompt = A task assigned during the workday

Best practices for writing system prompts:

  • Be concise and clear - overly long prompts can confuse the model
  • Start with role and mission - define what the GPT should do (e.g., “Provide financial insights for Banana Inc”)
  • Set the tone - indicate whether replies should be professional, client-friendly, or casual
  • Avoid contradictions - don’t say “Be formal” and “Be casual” in the same prompt
  • Iterate and test - try different prompts and evaluate how they affect output

Example system prompts:

Internal Company Assistant:

You are Banana Inc’s internal assistant. Provide concise answers to employee questions and ensure your responses are clear and accurate.

Client-Facing Support:

You are Synthreo GPT. Assist clients by explaining product features and answering questions in a professional, customer-friendly tone.

Analytics Use Case:

You are Banana Inc GPT. Provide clear, data-driven insights based on internal analytics. Explain results in a way non-technical staff can understand.

Avoid persona-driven prompts like this in the system message:

You are a friendly tutor who helps students with math homework.

Instead, anchor your GPT to your company or project. Personas belong in Custom GPT Instructions.


  • Do not include sensitive data or credentials in a system prompt
  • Keep compliance requirements (tone, disclaimers, safety notes) in the prompt if your use case requires them
  • Review prompts regularly to ensure they align with company or client policies

Issue                            | Cause                                               | Fix
---------------------------------|-----------------------------------------------------|-----------------------------------------------------------------
Inconsistent outputs             | Contradictory instructions                          | Review the prompt for conflicting rules
Model ignores later instructions | Prompt is too long                                  | Shorten the system prompt; move details to Custom GPT Instructions
Unexpected behavior              | System prompt conflicts with Custom GPT Instructions | Ensure the system prompt sets guardrails only, not persona
Prompt changes not taking effect | Browser cache or open conversation                  | Start a new conversation after saving changes

Can I edit the system prompt later? Yes. Go to Models & API Keys and edit the model’s System Message field.

Does the system prompt override user prompts? Yes, it takes precedence. User prompts drive the interaction within the boundaries set by the system prompt.

How is this different from Custom GPT Instructions? System prompts define global rules. Custom GPT Instructions define persona-specific behavior for one GPT.

Can I reuse prompts across models? Yes. You can copy and adapt system prompts across multiple model configurations.

What if my system prompt is too long? Long prompts consume more tokens, leaving less space for conversation history. The model may “forget” earlier parts of the conversation faster. Keep system prompts under 500 tokens when possible.

Can I leave the System Message blank? Yes. If the System Message is left blank, the model operates without a global system-level instruction. Each Custom GPT’s own Instructions then serve as the primary behavioral guide. This is not recommended for production environments where compliance or safety rules are required.


Metric                                 | Approximate Value
---------------------------------------|--------------------------------------------------------
1 token                                | ~4 characters
100 tokens                             | ~75 words
1,000 tokens                           | ~750 words (~1.5-2 pages)
500-token system prompt in 16k context | Leaves ~15.5k tokens for instructions + conversation

Every additional 100 tokens in the system prompt increases average latency by approximately 30-60 ms.