OpenAI GPT - Synthreo Builder
The OpenAI GPT node for Builder sends prompts to OpenAI language models (GPT-3.5 Turbo, GPT-4, GPT-4o, GPT-5, and specialized models) or Azure OpenAI and returns AI-generated text responses within your agent workflow. The node can generate text, answer questions, analyze content, and perform complex reasoning tasks using your business data.
Quick Summary
- Variants: OpenAI and Azure OpenAI
- Primary Purpose: Generate text, answer questions, analyze content, perform reasoning, and call functions/tools.
- Key Difference: While both nodes behave the same inside ThreoAI, their authentication differs and so does their usage outside the platform.
- OpenAI: API Token only
- Azure OpenAI: Resource Endpoint + API Key + API Version + Deployment Name
The difference mainly comes down to hosting, compliance, and access. OpenAI’s API is hosted directly by OpenAI, offering the latest models first and using a simple API key for authentication, with billing tied to an OpenAI account. Azure OpenAI, by contrast, runs on Microsoft Azure infrastructure in your chosen region, requires you to create model deployments (referenced by Deployment Name), and integrates with Azure subscription billing. While model availability may lag slightly compared to OpenAI direct, Azure provides stronger enterprise compliance (HIPAA, GDPR, FedRAMP) and regional data residency controls, making it the preferred option for organizations with strict governance requirements.
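The authentication differences above show up directly in how each service's HTTP requests are shaped. The sketch below is illustrative (the endpoint, deployment name, and API version values are placeholders, not real credentials): OpenAI uses one fixed base URL with a Bearer token and names the model in the request body, while Azure OpenAI uses your resource endpoint, an `api-key` header, and puts the deployment name in the URL path.

```python
# Illustrative sketch of how the two services' chat-completion requests differ.
# The endpoint, deployment name, and API version below are placeholders.

def openai_request(api_token: str, model: str) -> dict:
    """OpenAI direct: single base URL, Bearer-token auth, model named in the body."""
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {api_token}"},
        "body": {"model": model},
    }

def azure_openai_request(endpoint: str, api_key: str,
                         deployment: str, api_version: str) -> dict:
    """Azure OpenAI: resource endpoint, api-key header, deployment in the path."""
    return {
        "url": f"{endpoint}/openai/deployments/{deployment}"
               f"/chat/completions?api-version={api_version}",
        "headers": {"api-key": api_key},
        "body": {},  # no "model" field; the deployment in the URL selects it
    }
```

This is why the Azure variant needs four credential fields where OpenAI needs one: every piece (endpoint, key, version, deployment) ends up in a different part of the request.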
Authentication
OpenAI Authentication
Section titled “OpenAI Authentication”- API Token: Required
- Configured in: ThreoAI Credentials - OpenAI
Azure OpenAI Authentication
- Resource Endpoint: e.g., https://<resource>.openai.azure.com/ (required)
- API Key: (required)
- API Version: (required)
- Deployment Name: (required)
- Configured in: ThreoAI Credentials - Azure OpenAI
Node Purpose
This node connects to OpenAI/Azure OpenAI to:
- Generate text
- Answer questions
- Analyze and classify content
- Perform reasoning tasks on business data
- Call tools/functions as part of multipath workflows
Configuration Settings
The OpenAI and Azure OpenAI node is configured through its settings panel.
Required Settings (for basic use)
- Authentication - OpenAI or Azure OpenAI credentials
- Model - The chosen model (OpenAI) or deployment (Azure)
- System Message - Sets behavior and role of the assistant
- Prompt Editor - Typically holds the user prompt or data from another node
Referencing Variables in Prompts
Both the System Message and Prompt Editor fields support variable interpolation using double curly brace syntax. Variables from upstream nodes are referenced as {{Out.nodeId}}, where nodeId is the Output Column Name of the upstream node. The Input Data node’s default output is referenced as {{userSays}} (without the Out. prefix when used directly in prompts, though {{Out.userSays}} also works).
Examples of variable usage in a prompt:
- {{userSays}} - the user’s input message from the Input Data node
- {{Out.customerData}} - the output of a node whose Output Column Name is customerData
- {{Out.gpt_response}} - the output of a previous LLM node named gpt_response
You can combine static text and variables freely in both the System Message and Prompt fields.
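The resolution rules above can be sketched as a small substitution function. This is a hypothetical model of the resolver (Builder's actual implementation may differ): `Out.<name>` looks up an upstream node's output, and a bare `userSays` is treated as an alias for `Out.userSays`.

```python
import re

# Hypothetical sketch of {{...}} interpolation as described above. Out.<name>
# resolves against a dict of upstream node outputs; a bare name (e.g. userSays)
# resolves against the same dict without the Out. prefix.
def interpolate(template: str, outputs: dict) -> str:
    def resolve(match: re.Match) -> str:
        key = match.group(1).strip()
        if key.startswith("Out."):
            key = key[len("Out."):]
        # Leave unknown variables intact rather than erasing them.
        return str(outputs.get(key, match.group(0)))
    return re.sub(r"\{\{([^{}]+)\}\}", resolve, template)

outputs = {"userSays": "Where is my order?", "customerData": "VIP tier"}
prompt = interpolate("Customer ({{Out.customerData}}) asks: {{userSays}}", outputs)
# prompt: "Customer (VIP tier) asks: Where is my order?"
```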
Output Variable
The node produces a response stored under the property name defined in the Output settings (default: gpt_response). Downstream nodes reference this as {{Out.gpt_response}} (substituting your chosen name if different). The Output Data node would use {{Out.gpt_response}} to render the AI response to the end user.
Model Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| Temperature | Number (0.0 - 2.0) | 1.0 | Controls randomness. Lower values produce more predictable output; higher values produce more creative output. |
| Max Tokens | Number | Model default | Maximum number of tokens in the response. Limits response length and controls cost. |
| Response Format | Dropdown | Text | Text for natural language output; JSON Object for structured JSON responses. Use JSON Object with explicit JSON instructions in the prompt for reliable structured output. |
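These settings map onto fields of an OpenAI-style chat-completion request body, which Builder assembles for you. The sketch below shows roughly where each setting lands; the model name and message contents are illustrative.

```python
import json

# Hedged sketch of the request body the Model Parameters settings map to
# (OpenAI chat-completions style). Builder builds this payload internally.
payload = {
    "model": "gpt-4o",                           # chosen model / deployment
    "temperature": 0.2,                          # low = predictable output
    "max_tokens": 300,                           # caps response length and cost
    "response_format": {"type": "json_object"},  # JSON Object dropdown setting
    "messages": [
        # JSON Object mode works reliably only with explicit JSON instructions:
        {"role": "system",
         "content": "Reply in JSON with keys 'intent' and 'reply'."},
        {"role": "user", "content": "Where is my order?"},
    ],
}
body = json.dumps(payload)
```

Note the pairing in the last row of the table: selecting JSON Object in the dropdown is not enough on its own; the prompt itself should tell the model to emit JSON.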
Conversation Memory Settings
| Setting | Default | Description |
|---|---|---|
| Pass on Previous Messages | Off | When enabled, the node includes prior conversation turns in the request so the AI retains context across multiple exchanges. Enable this for chatbot scenarios where the user expects follow-up questions to work. Disabled means each request is processed independently, which is suitable for single-turn tasks. |
| Use Connected BotX Chatbot | Off | When enabled, the node automatically uses the conversation history from the connected BotX chat widget. This simplifies chatbot development because you do not need to manually wire up a history source. Enable this whenever the agent is published as a BotX chatbot. |
| History Property Source | - | Defines which property carries conversation history when Use Connected BotX Chatbot is off. Selecting Use Connected BotX Chatbot automatically populates this field. |
Enabling Pass on Previous Messages increases token usage per request because the full conversation history is sent each time. For long conversations this can affect cost and response latency.
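The cost effect is easy to see in a sketch: with Pass on Previous Messages enabled, every prior turn is resent on each request, so request size grows with conversation length. The token counts below are rough word counts, not real tokenizer output.

```python
# Sketch of why "Pass on Previous Messages" grows token usage: the full
# history is included in every request. Word counts stand in for tokens.
def build_request(system: str, history: list, user_msg: str) -> list:
    messages = [{"role": "system", "content": system}]
    messages += history  # included only when the setting is enabled
    messages.append({"role": "user", "content": user_msg})
    return messages

def rough_tokens(messages: list) -> int:
    # Crude estimate: whitespace-separated words, not a real tokenizer.
    return sum(len(m["content"].split()) for m in messages)

history = []
sizes = []
for turn in ["Hi", "Where is my order?", "It was order 1234"]:
    request = build_request("You are a support assistant.", history, turn)
    sizes.append(rough_tokens(request))
    history.append({"role": "user", "content": turn})
    history.append({"role": "assistant", "content": "(assistant reply)"})
# Each request is larger than the last; a long chat can dominate cost.
```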
Reasoning Model Settings (GPT-5 and similar)
These settings appear when a reasoning-capable model is selected.
| Setting | Type | Default | Description |
|---|---|---|---|
| Reasoning Effort | Dropdown: Low / Medium / High | Medium | Controls how much computational effort the model applies to reasoning before responding. Use High for complex multi-step problems, Low for simple classification or lookup tasks. Higher effort improves accuracy but increases response time and cost. |
| Verbosity Level | Dropdown: Low / Medium / High | Medium | Controls response length and detail. Low produces brief summaries; High produces thorough explanations. Adjust based on whether the consumer of the output is a human reading a summary or a downstream node expecting detailed structured text. |
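Assuming the two dropdowns map onto the OpenAI API's reasoning parameters (`reasoning_effort` and `verbosity`; the exact field names Builder sends are an implementation detail), a request for a hard problem with a short answer might look like:

```python
# Assumed mapping of the dropdowns onto OpenAI-style reasoning parameters.
# Field names are an assumption about Builder's internals; model name is
# illustrative.
payload = {
    "model": "gpt-5",
    "reasoning_effort": "high",  # Reasoning Effort dropdown: deep reasoning
    "verbosity": "low",          # Verbosity Level dropdown: brief final answer
    "messages": [{"role": "user", "content": "Plan a 3-step rollout."}],
}
```

The two settings are independent: high effort with low verbosity asks the model to think hard but answer briefly.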
Tools Configuration
The Tools section lets you define function schemas that the model can choose to call. When the model decides a tool call is appropriate, it returns a structured response containing the function name and parameters rather than plain text. A downstream OpenAI Tool Eval node reads that response and extracts the parameters.
To enable tool calling:
- Define one or more tool schemas in the Tools section.
- Disable Omit in the Output settings so the full raw response (including tool_calls) is preserved for the Tool Eval node.
- Add an OpenAI Tool Eval node downstream and set its Full Response Property Name to match the output name of this node (default: gpt_response).
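The sketch below shows a tool schema in OpenAI function-calling format (getWeather and city are illustrative names) and how a Tool Eval-style step would pull the call out of the raw response. The mocked response dict mirrors the OpenAI chat-completion shape and is why Omit must be disabled: with Omit on, the tool_calls structure would be stripped before the Tool Eval node sees it.

```python
import json

# Illustrative tool schema in OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "getWeather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def extract_tool_call(raw_response: dict) -> tuple:
    """Return (function_name, arguments_dict) from a raw chat completion,
    the way a Tool Eval-style step would."""
    call = raw_response["choices"][0]["message"]["tool_calls"][0]
    return call["function"]["name"], json.loads(call["function"]["arguments"])

# Mocked raw response in the shape the API returns tool calls:
raw = {"choices": [{"message": {"tool_calls": [
    {"function": {"name": "getWeather", "arguments": '{"city": "Oslo"}'}}
]}}]}
name, args = extract_tool_call(raw)  # ("getWeather", {"city": "Oslo"})
```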
RAG Training Settings
RAG (Retrieval-Augmented Generation) allows the model to retrieve relevant context from a knowledge base before generating a response.
| Setting | Options / Type | Default | Description |
|---|---|---|---|
| Training Style | Questions & Answers / Text Documents | Empty | Format of training data. Q&A pairs are suited for structured FAQ-style knowledge. Text Documents accept raw unstructured content that is chunked and embedded for retrieval. |
| Embedding Model | Dropdown | - | The model used to convert text into vector embeddings. Affects retrieval accuracy and relevance. Choose a model that matches the language and domain of your training data. |
| Training Mode | Full Training / Rebuild Embeddings / Rebuild Index Only / Fetch Data Only | Full Training | Controls how much of the training pipeline runs. Full Training reprocesses everything. Partial modes save time for incremental updates when only part of the data has changed. |
RAG Inference Settings
| Setting | Options / Type | Default | Description |
|---|---|---|---|
| Distance Function | Cosine / Euclidean / Manhattan / Chebyshev | Cosine | Similarity metric for comparing query embeddings to stored document embeddings. Cosine similarity works well for most natural language tasks. |
| Minimum Confidence Threshold | Number (0.0 - 1.0) | 0.0 | Filters out retrieved chunks whose similarity score falls below this value. Raise this to reduce irrelevant context being injected into the prompt; lower it if relevant content is being missed. |
| Top N Contexts | Number | 0 (automatic) | How many retrieved document chunks to inject into the prompt. More contexts provide broader coverage but increase token usage and cost. |
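A minimal sketch of the retrieval step these three settings control, using cosine similarity, a confidence threshold, and a top-N cutoff. The vectors here are toy data; real embeddings come from the selected Embedding Model and have hundreds of dimensions.

```python
import math

# Toy sketch of inference-time retrieval: score chunks by cosine similarity,
# drop those below the confidence threshold, keep the top N.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, chunks, min_confidence=0.0, top_n=2):
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in chunks]
    kept = [(s, t) for s, t in scored if s >= min_confidence]
    kept.sort(key=lambda pair: pair[0], reverse=True)
    return [t for _, t in kept[:top_n]]

chunks = [
    ("Refund policy: 30 days.", [0.9, 0.1, 0.0]),
    ("Shipping takes 3-5 days.", [0.1, 0.9, 0.0]),
    ("Office hours: 9-5.",       [0.0, 0.1, 0.9]),
]
# A query about refunds/shipping keeps the two relevant chunks and filters
# the off-topic one below the 0.3 threshold.
context = retrieve([0.8, 0.2, 0.0], chunks, min_confidence=0.3, top_n=2)
```

Raising min_confidence trims marginal chunks; raising top_n injects more context at the cost of extra prompt tokens, exactly the trade-off the table describes.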
Output Settings
| Setting | Default | Description |
|---|---|---|
| Output Property Name | gpt_response | The name of the variable that holds this node’s response. Referenced downstream as {{Out.gpt_response}} (or your chosen name). |
| Omit | On | When enabled, strips the raw API response envelope and returns only the text content. Disable Omit when chaining to an OpenAI Tool Eval node, because the Tool Eval node needs the raw tool_calls object from the full response. |
| Sources | Disabled | When enabled, the node includes source references from the RAG retrieval in its response. Useful when users need to trace which knowledge base documents were used. |
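The Omit behavior can be sketched as a one-branch function, assuming an OpenAI-style raw response envelope (the mocked response below is illustrative): with Omit enabled only the assistant text survives, while disabling it passes the full envelope, tool_calls included, downstream.

```python
# Sketch of the Omit setting, assuming an OpenAI-style response envelope.
def apply_omit(raw_response: dict, omit: bool):
    if omit:
        # Keep only the assistant's text content.
        return raw_response["choices"][0]["message"]["content"]
    # Pass the full raw envelope through (needed by a Tool Eval node).
    return raw_response

# Mocked raw response for illustration:
raw = {"id": "chatcmpl-123", "choices": [{"message": {
    "content": "Your order ships tomorrow.", "tool_calls": None}}]}

text_only = apply_omit(raw, omit=True)   # just the assistant text
full = apply_omit(raw, omit=False)       # raw envelope for Tool Eval
```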
Example Usage
Example 1 - Conversational Chatbot
- Add an Input Data node (default output: userSays).
- Add this node. Set the System Message to define the assistant’s persona and scope.
- In the Prompt Editor, reference {{userSays}} to pass the user’s message to the model.
- Enable Pass on Previous Messages and Use Connected BotX Chatbot.
- Add an Output Data node referencing {{Out.gpt_response}}.
Example 2 - Function Calling with Tool Eval
- Define a tool schema in the Tools section (e.g., getWeather with a city parameter).
- Disable Omit in Output so tool_calls are included in the raw response.
- Add an OpenAI Tool Eval node downstream, set Full Response Property Name to gpt_response, and set Match Function Name to getWeather.
- Route tool calls using the Tool Eval node’s output to a node that handles the weather lookup.
Example 3 - RAG Workflow
- Configure RAG Training with Training Style and Embedding Model.
- Click Training Model to run the training pipeline.
- At inference time, set Top N Contexts and Minimum Confidence Threshold to control retrieval quality.
- The model receives retrieved context alongside the user prompt and generates a grounded response.
- Enable Sources if users need to see which documents were referenced.
Best Practices
- Always configure a clear System Message that defines the assistant’s role, tone, and boundaries.
- Use JSON Object response format combined with explicit JSON instructions in the prompt when downstream nodes expect structured data.
- Disable Omit only when chaining to a Tool Eval node - leaving it enabled for all other cases keeps output clean.
- Pin a specific model or deployment to avoid unexpected behavior changes when new model versions are released.
- Enable Sources when building knowledge-base assistants where traceability matters.
- Keep tool parameter schemas strict and well-documented so the model reliably calls tools with the correct parameters.
- For single-turn tasks (classification, extraction, summarization), leave Pass on Previous Messages disabled to reduce token usage.