🧪 Test Design Best Practices

We want you to get the most from the tests you run with Intelligems! See below for our suggested best practices for designing solid tests.

Last updated 5 months ago

Create a Testing Roadmap

We recommend creating a testing roadmap as you complete onboarding with Intelligems. Major factors to consider when creating a test roadmap include:

  • Define Clear Objectives: Before launching an A/B test, identify what you aim to achieve. Whether it's increasing the conversion rate, reducing cart abandonment, or improving click-through rates on product pages, having a clear objective helps in designing the test and measuring its success accurately. A few examples:

    • For businesses with lower margins, an increase in revenue can have a dramatic effect on the bottom line.

    • Similarly, for businesses with higher margins, a small change in conversion can significantly boost total profit.

    • Businesses that are early on in their journey may be more focused on increasing conversion rates right away.

  • Intuition or Customer Feedback: Combine your business objectives with your intuition or customer feedback and use Intelligems tests to confirm and quantify potential changes to your merchandising strategy.

  • Get Creative: Many test types are possible with Intelligems.

  • Test Timing: Plan for each test to take about 3 to 4 weeks. We recommend running each test for at least one week to capture any changes in customer behavior related to day of the week. We also recommend running each test for no longer than five weeks due to increased risk of device switching / cache clearing.

    • We typically recommend running a test until each group has 200-300 orders to begin seeing significant results. Use this rule of thumb to estimate how long each test will need to run.

  • Test Frequently: Market conditions and consumer behavior change frequently. Test new hypotheses and run experiments frequently to make sure you're maximizing your store's potential at all times.

  • Determine Stat Sig before starting the test: focus on the probability that one of your test groups outperforms the control. While we suggest reaching 300 orders per test group, actual requirements vary with the nature of the change and your desired confidence threshold. A higher confidence threshold requires more data than a lower one, and larger changes typically yield results more quickly than subtle adjustments. If you're testing a change that's low-risk to implement and easy to revert, you may be comfortable with a lower threshold (e.g. 85%). If you're testing something more important and riskier, you may prefer a higher threshold (e.g. 90%).

  • Before launching tests, establish clear thresholds for statistical significance, orders, visitors, and duration. Have a hypothesis and, upon concluding the test, assess whether results align with your expectations. If a test doesn't achieve significance within the expected timeframe, use the available data to make an informed decision, considering the change's risk and reversibility. You can read more about statistical significance here.
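As a rough illustration of the 200-300 orders rule of thumb above, here is a minimal sketch of estimating test duration. The order volumes are hypothetical, and it assumes traffic (and therefore orders) splits evenly between groups:

```python
import math

def estimated_test_weeks(orders_per_day, n_groups, target_orders_per_group=300):
    """Rough estimate of how many weeks a test needs to run for each
    group to reach the target order count, assuming an even split of
    orders between groups."""
    orders_per_group_per_day = orders_per_day / n_groups
    days = target_orders_per_group / orders_per_group_per_day
    return math.ceil(days / 7)  # round up to whole weeks

# e.g. a store taking ~60 orders/day running a 2-group test:
# each group collects ~30 orders/day, so 300 orders takes 10 days -> 2 weeks
print(estimated_test_weeks(60, 2))  # 2
```

If the estimate comes out well past the five-week guideline above, that is a sign to test fewer groups or a larger change.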


Determining Traffic Allocations Between Groups

We generally recommend allocating traffic evenly between groups, except in certain circumstances such as:

  • You have already decided to change prices and want to hold back a small amount of traffic on prior prices

  • You want to allocate most traffic to the control because of customer support concerns

  • You have decided to remove allocation of new traffic to certain test groups part-way through the test

    • Note: Changing the allocation of test traffic during a test may result in skewed data. If you remove traffic from one group, we recommend scaling the other groups proportionally to their prior allocations.
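To make the proportional-scaling note above concrete, here is a small sketch. The group names and percentages are made up for illustration:

```python
def rescale_allocations(allocations, removed_group):
    """Drop one group and scale the remaining groups proportionally
    to their prior allocations so they again sum to 100%."""
    remaining = {g: pct for g, pct in allocations.items() if g != removed_group}
    total = sum(remaining.values())
    return {g: round(pct / total * 100, 1) for g, pct in remaining.items()}

# e.g. a 50/25/25 test where group "C" is dropped mid-test:
print(rescale_allocations({"Control": 50, "B": 25, "C": 25}, "C"))
# {'Control': 66.7, 'B': 33.3}
```

Because the remaining groups keep their relative proportions (here 2:1), the comparison between them stays on the same footing as before the change.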


Determining the Magnitude of Changes to Test

We recommend starting with broad changes and using results to narrow in on more refined tests. For instance, if you're testing:

  • Prices: start with an 8-10% increase and decrease, simultaneously

    • If traffic allows, testing 3-4 groups simultaneously will yield more insightful data and will give you a sense of your products' elasticities

    • If conditions only allow you to test in one direction (i.e. decreasing prices isn't an option for your business), pick a few points in the direction you wish to test

    • Consider factors such as seasonality: for example, if you sell winter and summer products, running a site-wide price test may not make much sense, since results for those categories are probably very different at any given point in the year. You may focus on running the test over products that will yield more meaningful results.

    • Are products substitutable? If testing individual products, avoid changing the price of one product in a way that could skew the results for another product.

  • Shipping Rates / Thresholds: start with an 8-10% increase / decrease, simultaneously
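As an illustration of picking broad starting points, the ±8-10% guidance above might translate into test prices like the following. The base price and percentages are hypothetical; choose magnitudes that suit your margins:

```python
def price_test_points(base_price, pcts=(-0.10, 0.10)):
    """Generate candidate test prices around the current (control) price.
    pcts holds the percentage changes to test, e.g. -10% and +10%."""
    points = {"control": round(base_price, 2)}
    for pct in pcts:
        label = f"{pct:+.0%}"  # e.g. "-10%" or "+10%"
        points[label] = round(base_price * (1 + pct), 2)
    return points

# e.g. a $49.99 product tested 10% down and 10% up:
print(price_test_points(49.99))
# {'control': 49.99, '-10%': 44.99, '+10%': 54.99}
```

With enough traffic for 3-4 groups, you could pass more percentages (e.g. `(-0.10, 0.08, 0.15)`) to get a broader read on elasticity in one test.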


Only Make One Change

  • We recommend keeping your existing prices and configurations for the experiment's "control group" so that your baseline stays consistent.

  • If there are unique aspects to how you merchandise your products (e.g. if you offer bundle discounts, welcome offers, or subscriptions), consider how these elements of your store might be affected by tests you want to run. In general, we recommend keeping these types of discounts as consistent as possible across groups so these variables do not create noise in the test.

  • If you are running a content test, limit the change to one element. If you change multiple elements in a single test, it will be difficult to determine which change caused any difference in performance.
