Build a Multi-Armed Bandit for Dynamic Traffic Optimization
Learn how to implement a Multi-Armed Bandit (MAB) workflow using the Intelligems API to automate traffic allocation, minimize opportunity cost, and maximize revenue during live experiments.
Overview:
Multi-armed bandit (MAB) testing moves beyond static A/B splits by dynamically optimizing your traffic in real-time. Instead of waiting weeks for a test to conclude, a MAB approach identifies high-performing variants early and automatically shifts traffic toward them.
This creates a high-efficiency feedback loop that:
Maximizes Conversions: Scales winning variants as soon as they show a statistical lead.
Reduces "Regret": Minimizes the "opportunity cost" of sending traffic to underperforming variants during the data-collection phase.
Automates Agility: Removes the need for manual intervention, allowing your experiments to be self-optimizing and revenue-focused from day one.

Decision Logic:
We'll use a 4-hour polling interval with a minimum observation window of 72 hours.
The logic we'll use has three layers, each acting as a gate before a shift is made:
Layer 1 — Is there enough data? Each variant needs a minimum sample size before its metrics are trustworthy. We'll set a floor, something like 300+ orders per variant, before any reallocation is considered. Below that threshold, the workflow does nothing.
Layer 2 — Is the signal meaningful? We compare variants on revenue per visitor. Rather than reacting to any difference at all, we'll require a meaningful gap — a variant must be outperforming the control by at least 5% (or control must be beating all variants by at least 5% each) to qualify for a traffic boost. This prevents micro-fluctuations from triggering constant reshuffling.
Layer 3 — How much do you shift, and when do you stop? We'll use a stepped reallocation approach rather than slamming all traffic to the winner. A reasonable ladder looks like: start at 50/50, then shift to 60/40, then 70/30, then 80/20. Each step requires the winning variant to maintain its lead across two consecutive check cycles before advancing to the next step. This protects against a variant that looks good in one window but regresses in the next. We'll also set a hard cap — never go above 80/20, so the control always retains enough traffic to remain statistically valid and to detect any reversal.
Plus we'll add in guardrails. If a previously winning variant starts underperforming (drops below the control), the logic should revert the split back toward 50/50 rather than just stalling. This handles novelty effects or external factors like a sale event skewing results temporarily.
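The stepped ladder and guardrail above can be sketched as a small pure function. This is an illustrative sketch, not part of the Intelligems API; the function and parameter names are made up for clarity.

```javascript
// Stepped reallocation ladder: 50/50 -> 60/40 -> 70/30 -> 80/20 (hard cap).
const LADDER = [50, 60, 70, 80]; // winner's share of traffic at each step

// currentShare: winner's current traffic %.
// consecutiveWins: check cycles in a row the winner has held a >=5% RPV lead.
// winnerRegressed: true if the previous winner has dropped below the control.
function nextSplit(currentShare, consecutiveWins, winnerRegressed) {
  if (winnerRegressed) return 50;               // guardrail: revert toward 50/50
  if (consecutiveWins < 2) return currentShare; // need two clean cycles to advance
  const idx = LADDER.indexOf(currentShare);
  if (idx === -1 || idx === LADDER.length - 1) return currentShare; // cap at 80/20
  return LADDER[idx + 1];
}
```

Note the hard cap: even with an indefinite winning streak, the function never returns more than 80, so the control always keeps at least 20% of traffic.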
Software Used:
To build this automation, I'll be using n8n, but you could choose to build this with GitHub Actions, Make, or other tools.
How to Build a Multi-Armed Bandit Automation:
Step 1: Get Your API Keys
To request access and receive your Intelligems API key(s), contact our support team.
Step 2: Create the Workflow in n8n
Node 1: Schedule Trigger
Type: Schedule Trigger
Mode: Every Hours
Hours Between Triggers: 4

Node 2: Account Config
Add "Code" node
Select "Code in JavaScript"
Select "Run Once for All Items"
Take the below code, add in your API Keys & paste into n8n:
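A minimal sketch of what this Code node can contain, assuming one config object per client. The orgId, name, and apiKey values are placeholders you replace with your own; the final `return` line is what n8n executes (n8n Code nodes must return an array of `{ json: ... }` items).

```javascript
// Account config: build one n8n item per client/organization.
// All values below are placeholders -- fill in your own Intelligems
// org IDs, display names, and API keys.
function accountConfig() {
  const accounts = [
    { orgId: 'InsertOrgId', name: 'Client A', apiKey: 'InsertApiKey' },
    // add one object per additional client...
  ];
  // n8n Code nodes expect an array of { json: ... } items
  return accounts.map((account) => ({ json: account }));
}

// In the n8n Code node, the body would end with:
// return accountConfig();
```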

Node 3: Get All Running Tests
Click "+" after the Schedule node
Search for "HTTP Request"
Configure:
Method: GET
URL: https://api.intelligems.io/v25-10-beta/experiences-list?category=experiment&status=started
Authentication: None
Enable Send Headers
Name: intelligems-access-token
Value: {{ $json.apiKey }}
Click "Execute step" to verify it works

Node 4: Flatten the Results
Add "Code" node
Select "Code in JavaScript"
Select "Run Once for All Items"
Update the below code so that InsertOrgId matches the Intelligems organization IDs of your clients (you can find these in Intelligems under Settings > General > Organization Settings), the name value is the display name for your clients, and the apiKey value is the Intelligems API Key for those clients. Then paste this code into the Code section in n8n.
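A sketch of the flattening logic: turn each per-client API response into one n8n item per running experiment, carrying the client's apiKey forward so downstream HTTP nodes can authenticate. The response shape (an `experiences` array with `id` and `name` fields) is an assumption; adjust the property names to match the actual list-endpoint payload.

```javascript
// Flatten per-client responses into one item per experiment.
// Each input object is assumed to carry orgId/name/apiKey from the
// config node plus an `experiences` array from the list endpoint.
function flattenExperiments(responses) {
  const out = [];
  for (const res of responses) {
    for (const exp of res.experiences || []) {
      out.push({
        json: {
          experienceId: exp.id,      // used in later request URLs
          experienceName: exp.name,
          orgId: res.orgId,
          clientName: res.name,
          apiKey: res.apiKey,        // carried forward for auth headers
        },
      });
    }
  }
  return out;
}
```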

Node 5: Get Test Analytics Data
Add "HTTP Request" node
Configure:
Method: GET
URL: https://api.intelligems.io/v25-10-beta/analytics/resource/{{ $json.experienceId }}
Authentication: None
Enable Send Headers
Name: intelligems-access-token
Value: {{ $json.apiKey }}
Click "Execute step" to verify it works

Node 6: Check if enough data is gathered
Add "Code" node
Select "Code in JavaScript"
Select "Run Once for All Items"
Update the below code so that the MIN_ORDERS_PER_VARIATION & MIN_DAYS_RUNNING values match your threshold requirements. Then paste this code into the Code section in n8n.
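A sketch of the Layer 1 gate, assuming the analytics payload exposes an order count per variant and a test start date (the field names here are illustrative; match them to your actual analytics response).

```javascript
// Layer 1 gate: mark an experiment ready for the bandit only once every
// variant has enough orders AND the test has run long enough.
const MIN_ORDERS_PER_VARIATION = 300; // tune to your order volume
const MIN_DAYS_RUNNING = 3;           // 72-hour minimum observation window

// variants: [{ orders, ... }], startedAt: ISO date string.
function checkReady(variants, startedAt, now = new Date()) {
  const daysRunning = (now - new Date(startedAt)) / (1000 * 60 * 60 * 24);
  const enoughOrders = variants.every(
    (v) => v.orders >= MIN_ORDERS_PER_VARIATION
  );
  return { readyForBandit: enoughOrders && daysRunning >= MIN_DAYS_RUNNING };
}
```

Below either threshold, `readyForBandit` is false and the If node downstream stops the run for that experiment.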

Node 7: If readyForBandit = True
Add an "If" node
Set the condition as {{ $json.readyForBandit }} is equal to true

Node 8: Is Signal Meaningful (Round 1)
For the "true" response only, add "Code" node
Select "Code in JavaScript"
Select "Run Once for All Items"
Copy the below code into n8n:
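A sketch of the Layer 2 check: compute revenue per visitor (RPV) for the control and each variant, then classify the cycle as winning, revert, or hold. The input shape (`revenue`/`visitors` fields) and the action labels are illustrative assumptions; the downstream If node expects `action` to be "winning" or "revert".

```javascript
// Layer 2 gate: is the RPV gap big enough to act on?
const MIN_LIFT = 0.05; // a challenger must lead the control by >= 5%

// control/variants: objects shaped like { name, revenue, visitors }.
function evaluateSignal(control, variants) {
  const rpv = (v) => v.revenue / v.visitors;
  const controlRpv = rpv(control);
  // best-performing challenger by RPV
  const best = variants.reduce((a, b) => (rpv(a) >= rpv(b) ? a : b));
  const lift = (rpv(best) - controlRpv) / controlRpv;
  if (lift >= MIN_LIFT) return { action: 'winning', winner: best.name, lift };
  if (lift < 0) return { action: 'revert', winner: control.name, lift };
  return { action: 'hold', winner: null, lift }; // gap too small to act on
}
```

The "hold" branch is what keeps micro-fluctuations from triggering constant reshuffling.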

Node 9: If action = winning or revert
Add an "If" node
Set the condition as {{ $json.action }} is equal to winning OR {{ $json.action }} is equal to revert

Node 10: Wait 4 Hours
For the "true" response only, add "Wait" node
Set Wait Amount to 4
Set Wait Unit to Hours

Node 11: Get Fresh Test Analytics Data
Add "HTTP Request" node
Configure:
Method: GET
URL: https://api.intelligems.io/v25-10-beta/analytics/resource/{{ $json.experienceId }}
Authentication: None
Enable Send Headers
Name: intelligems-access-token
Value: {{ $('Code in JavaScript1').item.json.apiKey }}
Click "Execute step" to verify it works

Node 12: Is Signal Meaningful (Round 2)
Add "Code" node
Select "Code in JavaScript"
Select "Run Once for All Items"
Copy the below code into n8n:
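Round 2 re-runs the same signal check on the fresh analytics. On top of that, you only want to act when both rounds agree; that is the "two consecutive check cycles" requirement from Layer 3. A sketch of that agreement check (the `action`/`winner` field names are illustrative assumptions carried over from the Round 1 result):

```javascript
// Require the same verdict on two consecutive check cycles before acting.
// round1/round2: objects shaped like { action, winner }.
function confirmSignal(round1, round2) {
  const confirmed =
    round1.action === round2.action && round1.winner === round2.winner;
  // only a confirmed 'winning' or 'revert' should trigger reallocation
  return { ...round2, confirmed };
}
```

This protects against a variant that looks good in one 4-hour window but regresses in the next.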

Node 13: If action = winning or revert
Add an "If" node
Set the condition as {{ $json.action }} is equal to winning OR {{ $json.action }} is equal to revert

Node 14: Get Full Experience Config
For the "true" response only, add a "HTTP Request" node
Configure:
Method: GET
URL: https://api.intelligems.io/v25-10-beta/analytics/resource/{{ $('Code in JavaScript1').item.json.experienceId }}
Authentication: None
Enable Send Headers
Name: intelligems-access-token
Value: {{ $('Code in JavaScript1').item.json.apiKey }}
Click "Execute step" to verify it works

Node 15: Build the Payload
Add "Code" node
Select "Code in JavaScript"
Select "Run Once for All Items"
Copy the below code into n8n:
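A sketch of the payload builder, assuming the winner's target share (e.g. 60 for a 60/40 split) has already been decided by the stepped ladder from the Decision Logic section. The `variations`/`trafficPercent` field names are illustrative; mirror the structure returned by the Get Full Experience Config node so the PUT body matches what the API expects.

```javascript
// Build the raw JSON body for the traffic-allocation PUT request.
// targetShare: winner's new traffic %, already chosen by the ladder logic.
function buildPayload(experienceId, winnerId, loserId, targetShare) {
  const body = {
    variations: [
      { id: winnerId, trafficPercent: targetShare },
      { id: loserId, trafficPercent: 100 - targetShare },
    ],
  };
  // the downstream HTTP node sends this via {{ $json.bodyString }}
  return { experienceId, bodyString: JSON.stringify(body) };
}
```

Serializing to `bodyString` here (rather than passing an object) keeps the next node's Raw body field a simple expression.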

Node 16: Update the Traffic Allocation
Add "HTTP Request" node
Configure:
Method: PUT
URL: https://api.intelligems.io/v25-10-beta/experiences/{{ $json.experienceId }}
Authentication: None
Enable Send Headers
Name: intelligems-access-token
Value: {{ $('Code in JavaScript1').item.json.apiKey }}
Name: Content-Type
Value: application/json
Enable Send Body
Body Content Type: Raw
Content Type: application/json
Body: {{ $json.bodyString }} (be sure to use "Expression" and not "Fixed")
Click "Execute step" to verify it works

You did it!
Now your tests' traffic allocations will automatically shift more traffic toward winning variants.

Bonus: Enhancement ideas
Dynamically use Primary Metrics - Instead of always using Revenue per Visitor, dynamically use whatever metric is set as the test's primary metric.
Use Multiple Metrics - Use a combination of Revenue per Visitor, Conversion Rate, and other metrics before changing traffic allocation.
Only run on certain tests instead of all - Don't run the Bandit on every single test. Add a filter in your n8n workflow to only target specific experiments.
Time-Weighted Decay - Use a moving window (e.g., the last 7 days of data) so your traffic allocation stays "fresh" and responsive to recent performance.