Test Design Best Practices

We want you to get the most impact possible out of the tests you run with Intelligems! See below for our suggested best practices for designing solid tests.

Creating a testing roadmap

We recommend creating a testing roadmap as you complete onboarding with Intelligems. An example test roadmap can be found below. Major factors to consider when creating a test roadmap include:

  • Business Objectives. What are your major business objectives? Aligning on the one or two metrics that matter most (e.g. conversion, revenue, profit, margin) will dictate how you design your tests.

    • For businesses with lower margins, an increase in revenue can have a dramatic impact on the bottom line.

    • Similarly, for businesses with higher margins, a small change in conversion can dramatically boost total profit.

  • Intuition or Customer Feedback. Combine your business objectives with your intuition or customer feedback and use Intelligems tests to confirm and quantify potential changes to your merchandising strategy.

  • Get creative! A lot of different test types are possible with Intelligems.

  • Test Timing. Plan for each test to take ~3-4 weeks. We recommend running each test for at least one week to capture day-of-week variation in customer behavior. We also recommend running each test for no longer than five weeks due to the increased risk of device switching / cache clearing.

    • We typically recommend running a test until each group has 200-300 orders before you start to see significant results. Use this rule of thumb to estimate how long each test will need to run (see the duration sketch after this list).

  • Test Frequently! Market conditions and consumer behavior change often. Keep testing new hypotheses and running experiments to make sure you're maximizing your store's potential at all times.
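
As a rough illustration of the timing math above, here is a minimal sketch (in Python, with hypothetical numbers for daily orders, number of groups, and the orders-per-group target) of how you might estimate how long a test needs to run:

```python
# Rough estimate of how long a test needs to run, based on the
# "200-300 orders per group" rule of thumb described above.
# All inputs are hypothetical placeholders -- swap in your own numbers.

import math

def estimate_test_duration_days(daily_orders: float,
                                num_groups: int,
                                target_orders_per_group: int = 250) -> int:
    """Days needed for every group to reach the target order count,
    assuming traffic (and therefore orders) is split evenly."""
    orders_per_group_per_day = daily_orders / num_groups
    return math.ceil(target_orders_per_group / orders_per_group_per_day)

# Example: a store doing ~60 orders/day testing a control plus two variants.
days = estimate_test_duration_days(daily_orders=60, num_groups=3)
print(f"Estimated duration: {days} days (~{math.ceil(days / 7)} weeks)")
```

If the estimate lands well outside the one-to-five-week window described above, consider adjusting the number of groups before launching the test.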


Determining traffic allocations between groups

We generally recommend allocating traffic evenly between groups, except in certain circumstances, such as when:

  • You have already decided to change prices and want to hold back a small amount of traffic on prior prices

  • You want to allocate most traffic to the control because of customer support concerns

  • You have decided to stop allocating new traffic to certain test groups partway through the test

    • Note: Changing the allocation of test traffic during a test may skew your data. If you remove traffic from one group, we recommend scaling the other groups proportionally to their prior allocations (see the sketch below for an example).
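
For example, here is a minimal sketch (in Python, with hypothetical group names and allocation percentages) of what scaling the remaining groups proportionally to their prior allocations looks like:

```python
# Rescale the remaining groups' traffic allocations proportionally
# after removing one group mid-test. Group names and percentages
# are hypothetical placeholders.

def rescale_allocations(allocations: dict[str, float], removed: str) -> dict[str, float]:
    """Redistribute the removed group's share so the remaining groups
    keep the same ratios to one another and still sum to 100%."""
    remaining = {g: pct for g, pct in allocations.items() if g != removed}
    total = sum(remaining.values())
    return {g: round(pct / total * 100, 1) for g, pct in remaining.items()}

before = {"Control": 50, "Group A": 30, "Group B": 20}
after = rescale_allocations(before, removed="Group B")
print(after)  # {'Control': 62.5, 'Group A': 37.5}
```

Note that the control and Group A keep the same 5:3 ratio to each other before and after the change.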


Determining the magnitude of changes to test

We recommend starting with broad changes and using the results to narrow in on more refined follow-up tests. For instance, if you're testing:

  • Prices: start with an 8-10% increase and an 8-10% decrease, tested simultaneously

    • If traffic allows, testing 3-4 groups simultaneously will yield more insightful data and give you a sense of your products' elasticities (see the elasticity sketch after this list)

    • If conditions only allow you to test in one direction (e.g. decreasing prices isn't an option for your business), pick a few price points in the direction you wish to test

  • Shipping rates / thresholds: start with an 8-10% increase / decrease (rationale similar to above)
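
Since the pricing bullet above mentions getting a sense of your products' elasticities, here is a minimal sketch (in Python) of how you might estimate elasticity from two test groups using the standard midpoint (arc elasticity) formula. The prices and conversion rates are hypothetical placeholders, and conversion rate is used as a rough proxy for quantity demanded per visitor:

```python
# Estimate price elasticity of demand from two test groups using the
# midpoint (arc elasticity) formula. The prices and conversion rates
# below are hypothetical placeholders, not real results.

def arc_elasticity(price_a: float, conv_a: float,
                   price_b: float, conv_b: float) -> float:
    """Arc elasticity: % change in conversion divided by % change in price,
    each measured relative to the midpoint of the two groups."""
    pct_change_conv = (conv_b - conv_a) / ((conv_a + conv_b) / 2)
    pct_change_price = (price_b - price_a) / ((price_a + price_b) / 2)
    return pct_change_conv / pct_change_price

# Example: control at $50 converting at 3.0%, test group at $55 (a +10%
# change) converting at 2.7%.
e = arc_elasticity(price_a=50, conv_a=0.030, price_b=55, conv_b=0.027)
print(f"Estimated elasticity: {e:.2f}")  # about -1.11; |e| > 1 suggests demand is elastic
```

With 3-4 groups, you can repeat this calculation between adjacent price points to see how elasticity changes across the range you tested.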


Other considerations

  • We recommend keeping your existing prices and configurations as the experiment's control group so that your baseline stays consistent

  • If there are unique aspects to how you merchandise your products (e.g. bundle discounts, welcome offers, or subscriptions), consider how these might be impacted by the tests you want to run. In general, we recommend keeping these types of discounts as consistent as possible across groups so they do not create noise in the test
