This guide demonstrates how to build a webhook-triggered automation that eliminates manual data collation, providing instant post-test analysis the moment an experiment ends.
The workflow triggers when an Intelligems test ends, fetches the results, generates an AI-powered CRO analysis with strategic insights, and posts everything to a threaded Slack message where your team can discuss next steps.
AI can make mistakes. Always check its work before sharing with a client.
Software Used
To build this automated reporting pipeline, you will need the following tools:
n8n: The primary workflow automation platform used to connect APIs and schedule tasks.
Groq: A high-speed AI inference engine used to process test data and generate natural language reports.
Note: You can swap this for OpenAI or Anthropic if preferred, but this guide uses Groq for its free API tier.
Slack: The final destination where the AI-generated health checks and reports will be posted.
How to Create Your Final Test Results & Learning Report:
Back in Intelligems, go to Settings > Webhooks and create a webhook pointing at your n8n production URL, with the action type set to "end experience". If you're an agency, repeat this for each client account.
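Before building out the rest of the workflow, it helps to know the shape of the incoming payload. The only fields the nodes below rely on are experience.id and experience.organizationId in the webhook body; a minimal sketch of that body, with placeholder IDs (any other fields Intelligems sends are simply ignored downstream):

{
  "experience": {
    "id": "exp_abc123",
    "organizationId": "org_xyz789"
  }
}

Note that n8n nests the delivered body under $input.item.json.body, which is why the Code node below reads body?.experience?.id.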
Node 2: Return API Key to Use Based Upon Organization ID
Add "Code" node
Select "Code in JavaScript"
Select "Run Once for All Items"
Update the code below so that each placeholder key ('org-id-1', 'org-id-2') is replaced with the Intelligems organization ID of one of your clients (you can find these in Intelligems under Settings > General > Organization Settings), each name value is that client's display name, and each apiKey value is that client's Intelligems API key. Then paste the code into the Code field in n8n.
// API key mapping - configure your organization IDs, API keys, and client display names
const API_KEY_MAP = {
  'org-id-1': {
    apiKey: 'api-key-1',
    name: 'Client Name Here'
  },
  'org-id-2': {
    apiKey: 'api-key-2',
    name: 'Client Name Here'
  },
  // Add more organization mappings here
};

// Extract the experience and organization IDs from the webhook payload
const experienceId = $input.item.json.body?.experience?.id;
const organizationId = $input.item.json.body?.experience?.organizationId;

// Look up the configuration for this organization
const orgConfig = API_KEY_MAP[organizationId];
const apiKey = orgConfig?.apiKey;
const clientName = orgConfig?.name; // the display name is stored under `name`

// Return as an array with a single object for the downstream nodes
return [
  {
    json: {
      experienceId,
      apiKey,
      organizationId,
      clientName
    }
  }
];
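One optional hardening step, as a sketch: if a webhook ever arrives for an organization you haven't mapped, apiKey and clientName pass through as undefined and the failure only surfaces later, at the HTTP Request node. A guard added right after the lookup fails fast with a readable error instead:

// Optional guard: stop early if the organization isn't in API_KEY_MAP,
// rather than passing undefined values to the HTTP Request node
if (!orgConfig) {
  throw new Error(`No API key configured for organization ID: ${organizationId}`);
}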
In the AI node, paste the following prompt:

Analyze the A/B test data below and create a Slack-formatted report that matches this EXACT structure:
*A/B TEST RESULTS: [Test Name]*
📊 *Test Overview*
• Traffic Split: [X] sessions per variant
• Conversions: [Variant 1] ([X] orders) | [Variant 2] ([X] orders)
💰 *Performance Comparison*
For each variant, display results in this order:
*[Use actual variant name from data, e.g., $Off or %Off]:*
Revenue Per Visitor: [+/- X.XX%] (confidence: [XX%])
Conversion Rate: [+/- X.XX%] (confidence: [XX%])
Average Order Value: [+/- X.XX%] (confidence: [XX%])
Profit Per Visitor: [+/- X.XX%] (confidence: [XX%]) // include only if COGS data exists
Projected Monthly Impact: [+/- $X,XXX]
Sample: [X] orders from [X] sessions
🔍 *Key Insights*
*Signal Strength:* [Strong / Moderate / Weak / Insufficient]
- Explain what the data pattern shows
- Identify which metrics are aligned vs. conflicting
- Note any segments or patterns worth investigating
📋 *DECISION FRAMEWORK*
*Sample Adequacy:* [Met / Not Met]
✅ Minimum 7-day runtime including weekend
✅ Minimum 350 conversions per variant
✅ Minimum 8,000 sessions per variant
✅ *Decision:* [IMPLEMENT / ITERATE / ABANDON / EXTEND]
IMPLEMENT if:
- Sample adequacy met
- Primary metric shows ≥4% improvement
- Confidence level ≥85%
- Downside risk is acceptable (lower bound of credible interval ≥-2%)
ABANDON if:
- Sample adequacy met
- Primary metric shows ≤-4% decline
- Confidence that variant is worse ≥85%
- Upside potential is negligible (upper bound ≤+2%)
ITERATE if:
- Sample adequacy met
- Mixed signals (one key metric up, another down)
- OR improvement exists but confidence <85% and test passed planned end date
- Recommendation: [specific iteration to test next]
EXTEND if:
- Sample adequacy not met
- OR confidence between 70-85% with meaningful trend (≥3% change)
- Estimated runway: [X] additional days needed
*Rationale:* [1-2 sentences explaining the decision]
🔗 *View Full Results*
https://app.intelligems.io/experiment/{{ $json.variations[0].experienceId }}
Formatting Rules
- Use single asterisks * for bold text (e.g., *Variant A*)
- Round percentages to 2 decimal places (e.g., +3.47%, -2.03%)
- Round confidence scores to whole numbers (e.g., 87%, 92%)
- Format revenue with commas, no decimals (e.g., +$8,450, -$3,200)
- Use proper em dashes (—) for breaks in text
- Keep spacing consistent
- For confidence, use the p2bc value: if p2bc = 0.87, display as "87%"
- Include emojis exactly as shown: 📊 💰 🔍 📋 ✅ 🔗
- Use bullet points with - symbol
- DO NOT include the word "text:" or any JSON formatting in your output
- Output only the formatted message content, no JSON wrapper
Decision Calculation Guidelines
Determining Confidence: Use p2bc (probability to beat control) directly as the confidence percentage
Estimating Extension Time:
- Calculate average daily session volume: total sessions ÷ days run
- Determine sessions needed for 8,000 minimum per variant
- For confidence building: if confidence is 70-75%, assume need 80% more data; if 75-85%, assume need 40% more data
- Convert to days and cap at 14 additional days maximum
- If >14 days needed, classify as ITERATE instead
Signal Strength Classification:
- Strong: confidence ≥85% and absolute change ≥4%
- Moderate: confidence 70-85% OR absolute change 3-4%
- Weak: confidence 50-70% OR absolute change 1-3%
- Insufficient: confidence <50% OR duration <7 days
Handling Multiple Variants:
- Report all variants separately
- In decision section, identify best-performing variant
- If multiple variants show promise, note this in rationale
- IMPORTANT: Use the actual variant names from the data (like "$Off", "%Off", "Variant A", etc.), do NOT use placeholder text like "[Variant Name]"
Test Name: {{ $json.experienceName }}
Input Data
{{ JSON.stringify($json, null, 2) }}
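The model is expected to apply the thresholds above on its own, but the same rules translate directly into code if you ever want to spot-check its classifications. A minimal JavaScript sketch (the function names are illustrative, not part of the workflow, and the rules are applied strongest-first to resolve any overlap in the stated ranges):

// Spot-check helper: mirrors the prompt's signal-strength rules
function classifySignalStrength(confidence, absChange, daysRun) {
  if (confidence < 0.50 || daysRun < 7) return 'Insufficient';
  if (confidence >= 0.85 && absChange >= 0.04) return 'Strong';
  if (confidence >= 0.70 || absChange >= 0.03) return 'Moderate';
  return 'Weak';
}

// Spot-check helper: mirrors the prompt's extension-time heuristics
function estimateExtensionDays(totalSessions, daysRun, confidence, variantCount) {
  const dailySessions = totalSessions / daysRun;
  // Sessions still needed to reach the 8,000-per-variant minimum
  const sessionsForMinimum = Math.max(0, 8000 * variantCount - totalSessions);
  // Extra data to firm up confidence: +80% if 70-75%, +40% if 75-85%
  let multiplier = 0;
  if (confidence >= 0.70 && confidence < 0.75) multiplier = 0.8;
  else if (confidence >= 0.75 && confidence < 0.85) multiplier = 0.4;
  const sessionsForConfidence = totalSessions * multiplier;
  const days = Math.ceil(Math.max(sessionsForMinimum, sessionsForConfidence) / dailySessions);
  return days > 14 ? 'ITERATE' : days; // cap at 14 extra days, otherwise iterate
}

// Examples: classifySignalStrength(0.87, 0.047, 12) returns 'Strong';
// estimateExtensionDays(12000, 10, 0.78, 2) returns 4 (4,800 sessions at 1,200/day)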
Finally, in the Slack node, post the parent message that announces the test and starts the thread:

Client: {{ $('Code in JavaScript').item.json.clientName }}
Test: <https://app.intelligems.io/experiment/{{ $('Code in JavaScript').item.json.experienceId }}|{{ $('HTTP Request').item.json.experienceName }}> - has ended
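The n8n Slack nodes handle the threading for you, but under the hood it is two chat.postMessage calls, where the second reuses the ts value from the first response as thread_ts. A sketch of the raw API equivalent, assuming placeholder SLACK_TOKEN and CHANNEL_ID values (Node 18+ for the built-in fetch):

// Raw Slack API equivalent of the two n8n Slack nodes
const SLACK_TOKEN = 'xoxb-...'; // placeholder bot token
const CHANNEL_ID = 'C0123456789'; // placeholder channel ID

async function postThreadedReport(parentText, reportText) {
  const post = (body) =>
    fetch('https://slack.com/api/chat.postMessage', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${SLACK_TOKEN}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(body)
    }).then((r) => r.json());

  // Parent message: "Test: ... - has ended"
  const parent = await post({ channel: CHANNEL_ID, text: parentText });

  // The AI report goes in-thread by reusing the parent message's ts
  return post({ channel: CHANNEL_ID, text: reportText, thread_ts: parent.ts });
}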