Advanced Prompt Chaining Guide
Advanced Best Practices for Prompt Chaining in SmartForms
FormWise SmartForms allow you to generate dynamic, multi-step AI workflows using Prompt Chaining. This feature is powerful for creating layered logic—where each step builds on the previous one to produce richer, more structured outputs.
Think: First generate a customer avatar → then write a value proposition → then turn it into an email campaign. All from a single SmartForm submission.
🧠 How Prompt Chaining Works
When your SmartForm has Prompt Instructions set, the AI will generate a response using your mapped form fields and any backend data sources you’ve added.
You can then add additional prompt steps, each one creating its own API call to the AI engine. These steps can either:
Re-use previous outputs (@step_1, @step_2, etc.)
Perform a new generation task using different instructions or context
Each step is treated as a separate AI output, giving you maximum flexibility.
🔁 Referencing Previous Steps (Using @step_x)
You can reference outputs from previous steps using @step_1, @step_2, etc. in your instructions.
Example:
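For instance, a second-step instruction might look like this (the customer-avatar scenario and wording are only illustrative):
"Using the customer avatar below, write a one-paragraph value proposition for the offer described in the form. Customer Avatar: @step_1"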
💡 Best Practice: Always provide context before inserting a previous step.
❌ "Use @step_1 to generate the copy."
✅ "Using the brand voice below, write a Google ad headline. Brand Voice: @step_1"
🧱 Prompt Chaining Strategy Tips
1️⃣ Define Clear Stages
Break down your generation process into logical steps:
Step 1: Generate avatar or data
Step 2: Use avatar to write copy
Step 3: Refine or format output
Step 4: Summarize or repackage as bullets, slides, emails, etc.
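As a rough sketch, the instructions for a chain like this might read as follows (the coaching-business scenario and wording are placeholders to adapt to your own form):
Step 1: "Create a detailed customer avatar for an online coaching business, covering demographics, pain points, and goals."
Step 2: "Using the customer avatar below, write a landing page headline and subheadline. Customer Avatar: @step_1"
Step 3: "Rewrite the copy below in a warmer tone and format it as bullet points. Copy: @step_2"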
2️⃣ Add Structure to Each Instruction
Treat each prompt like a self-contained module. Don’t assume the AI knows what @step_1 is; you need to explain it:
✅ “Based on the tone guide below, rewrite the value proposition in a more emotional tone. Tone Guide: @step_1”
3️⃣ Re-use SmartForm Fields in Later Steps
You can use both @step_1 and mapped fields like @q1answer in the same prompt.
Example:
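For instance, assuming @q1answer is mapped to the respondent's first name (your own field mapping may differ):
"Write a short outreach email addressed to @q1answer, built around the value proposition below. Value Proposition: @step_1"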
⚠️ Token Usage Warning
Each prompt step is a separate API call and has its own token budget.
Token Considerations:
Use GPT-4o or GPT-3.5 16K for larger steps
Keep inputs concise—large outputs from one step can eat into the next step’s limit
Avoid repeating full data source content in every step
Do not combine Web Browsing + Web Scraping + multiple large steps—this increases timeout risk
🚫 Excessive token use may result in:
Incomplete outputs
Step failures
Full SmartForm timeout
🔄 Data Source Considerations
If you’re referencing @step_1 and using backend data sources, remember:
Every step that references a data source re-ingests it, increasing token consumption
You can inject backend data at the first step only and pass the result downstream for efficiency
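For example, an efficient pattern (sketched with placeholder wording) is to reference the data source in Step 1 only and pass its output forward:
Step 1 (data source attached): "Summarize the key product details from the attached knowledge base in under 200 words."
Step 2 (no data source): "Using the product summary below, answer the customer question from the form. Product Summary: @step_1"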
📌 Summary Checklist
✅ Break your logic into clear, focused steps
✅ Always provide context for @step_1, @step_2, etc.
✅ Keep outputs clean and concise
✅ Use GPT-4o or GPT-3.5 16K for multi-step SmartForms
✅ Avoid reloading large data sources at each step
✅ Test each step independently before going live