Advanced Best Practices for Prompt Chaining in SmartForms
SmartForms support prompt chaining for multi-step AI workflows. Each step builds on the output of earlier steps, which keeps complex tasks structured and the results consistent.
Example workflow:
Generate a customer avatar.
Draft a value proposition.
Convert it into an email sequence.
How prompt chaining works
When a SmartForm has Prompt Instructions, it runs one model call. That call uses mapped form fields and any backend data sources.
You can add more prompt steps. Each step runs as a separate API call and produces its own output (a minimal sketch of the mechanics follows the list below).
Later steps can:
Reuse earlier outputs with @step_1, @step_2, and so on.
Run a new task with different instructions and context.
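To make the mechanics concrete, here is a minimal sketch of what a two-step chain amounts to. SmartForms runs these calls for you; call_model below is a hypothetical stand-in for the single model call made per step, not a real SmartForms API.

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for the one model call SmartForms makes per prompt step.
    return "(model output for: " + prompt[:40] + "...)"

# Step 1: its own call, its own output.
step_1 = call_model(
    "Create a customer avatar for the offer described in the form answers. "
    "Keep it under 150 words."
)

# Step 2: a separate call. The @step_1 reference is replaced with the raw
# text of step 1's output before this prompt is sent.
step_2 = call_model(
    "Write a cold email introducing the new offer.\n"
    "Use the customer avatar below.\n\n"
    "Customer avatar:\n" + step_1
)

Each step's output is plain text, which is why later instructions should say what the referenced text represents.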
Reference earlier steps with @step_x
Use @step_1, @step_2, and so on inside later instructions. Treat each reference as raw text from the earlier output.
Example
Write a cold email introducing the new offer.
Use the customer avatar below.
Customer avatar:
@step_1
Best practices:
Add a label before the reference.
State what the referenced content represents.
Specify the output format you want.
Avoid vague instructions like:
Use @step_1 to write an email.
Prefer explicit instructions like:
The customer avatar below describes the target buyer. Write a cold email to this buyer in three short paragraphs. Customer avatar: @step_1
Prompt chaining design patterns
Define clear stages
Split the workflow into small steps and keep each step focused. The sketch after the list below shows how staged steps map to separate calls.
Common staging:
Step 1: Generate core data (avatar, outline, research summary).
Step 2: Generate the primary asset (copy, plan, script).
Step 3: Refine and format (tone, structure, length).
Step 4: Repurpose (emails, ads, bullets, slides).
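For illustration only (not a SmartForms API), the staging above can be thought of as a list of focused instructions run in order, with @step_N markers filled in from earlier outputs. It reuses the hypothetical call_model stand-in from the earlier sketch.

def call_model(prompt: str) -> str:  # hypothetical stand-in, as in the earlier sketch
    return "(model output)"

# One focused instruction per stage; @step_N markers are replaced with the
# raw text of earlier outputs before each call.
stages = [
    "Create a customer avatar for the offer described in the form answers.",
    "Using the avatar in @step_1, draft a one-paragraph value proposition.",
    "Rewrite @step_2 in a friendly, confident tone, under 120 words.",
    "Turn @step_3 into a three-email sequence with subject lines.",
]

outputs = []
for instruction in stages:
    for i, earlier in enumerate(outputs, start=1):
        instruction = instruction.replace(f"@step_{i}", earlier)
    outputs.append(call_model(instruction))

Because each stage stays small, a weak or truncated output is easier to trace back to a single step.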
Make each step self-contained
Each step is a separate call, so the model does not carry over your intent from earlier steps. Restate the goal of the step and explain what the referenced content contains.
Example:
You are writing step 3 of the workflow. The text in @step_2 below is the value proposition drafted in the previous step. Turn it into a three-email nurture sequence, one short paragraph per email.
Value proposition:
@step_2
Reuse SmartForm fields in later steps
You can mix @step_x outputs with mapped fields like @q1answer.
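Conceptually, a later step's instruction is a template in which both mapped answers and earlier outputs are substituted as plain text before the call. A minimal sketch, again with the hypothetical call_model stand-in (the substitution shown is illustrative, not SmartForms' internal code):

def call_model(prompt: str) -> str:  # hypothetical stand-in, as in the earlier sketch
    return "(model output)"

q1answer = "dental clinics"        # mapped form field, e.g. the respondent's industry
step_1 = "(customer avatar text)"  # output of the first prompt step

template = (
    "Write a landing page headline for @q1answer.\n"
    "Match it to the customer avatar below.\n\n"
    "Customer avatar:\n@step_1"
)

headline = call_model(
    template.replace("@q1answer", q1answer).replace("@step_1", step_1)
)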
Token, cost, and timeout considerations
Each prompt step is a separate API call with its own token budget. Any output you reference later is re-sent as input to the steps that use it, so large outputs early in the chain increase cost and latency and can reduce quality later (see the rough estimate after the list below).
Practical guidance:
Use GPT-5.2 for deeper reasoning and better formatting.
Keep step outputs short when they will be reused downstream.
Avoid repeating large backend content in every step.
Avoid stacking web browsing, scraping, and large multi-step chains.
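A rough way to see the cost of oversized chains: any output referenced via @step_x is re-sent as input to every later call that uses it. Treating 1 token as roughly 4 characters (a common approximation, not how usage is billed):

chars_per_token = 4          # rough rule of thumb, not an exact figure
step_1_chars = 8000          # a long avatar or research dump from step 1
downstream_reuses = 3        # steps 2, 3 and 4 all reference @step_1

extra_input_tokens = step_1_chars / chars_per_token * downstream_reuses
print(extra_input_tokens)    # 6000.0 extra input tokens from one verbose step

Asking step 1 for a summary of around 150 words instead would cut that to a few hundred extra tokens.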
Common symptoms of oversized chains:
Truncated outputs.
A step failing mid-run.
The full SmartForm timing out.
Data source considerations
Backend data sources can be expensive in tokens. Referencing them in several steps re-sends the same content in each call and multiplies usage.
Recommended pattern (sketched after the list below):
Inject the data source in step 1.
Summarize or extract what you need.
Pass the summary downstream via @step_1.
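Sketched with the same hypothetical call_model stand-in, the pattern looks like this: the full data source text appears only in step 1, and only the short extract travels downstream.

def call_model(prompt: str) -> str:  # hypothetical stand-in, as in the earlier sketches
    return "(model output)"

data_source_text = "(large backend document: product docs, research notes, transcripts...)"

# Step 1 is the only step that sees the full data source.
step_1 = call_model(
    "From the document below, extract the 5 points most relevant to "
    "pricing objections, in under 100 words.\n\n" + data_source_text
)

# Later steps work from the short extract, not the original document.
step_2 = call_model(
    "Write an objection-handling email.\n\nKey points:\n" + step_1
)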
Checklist
Break the workflow into clear steps.
Add context labels before @step_x references.
Keep reusable step outputs concise.
Pick a model that matches your token needs.
Avoid re-ingesting large data sources across steps.