Experiment Builder

The experiment builder walks you through creating, configuring, and launching an A/B test. Each experiment defines which content variants to test, how traffic is split, and which audience to target.

Creating an Experiment

To create a new experiment:

  1. Navigate to Experiments in the dashboard sidebar.
  2. Click Create Experiment.
  3. Fill in the required fields:
    • Name -- a descriptive label for the experiment (e.g., "Onboarding v2 vs. Checklist").
    • Hypothesis -- what you expect to learn (e.g., "A shorter onboarding flow will increase completion rate").
    • Content -- select the flow or content item you want to test.
    • Audience -- optionally restrict the experiment to a specific audience segment.
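The required fields above can be modeled as a small data structure. This is a minimal sketch for illustration only; the field names here are hypothetical and may not match the product's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    name: str                # descriptive label, e.g. "Onboarding v2 vs. Checklist"
    hypothesis: str          # what you expect to learn
    content_id: str          # the flow or content item under test
    audience_segment: Optional[str] = None  # None means all eligible users

# A draft experiment targeting a specific segment (hypothetical IDs):
exp = Experiment(
    name="Onboarding v2 vs. Checklist",
    hypothesis="A shorter onboarding flow will increase completion rate",
    content_id="flow_onboarding_v2",
    audience_segment="new_signups",
)
```

Leaving Audience unset targets all eligible users, which mirrors the optional field above.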

Configuring Variants

Each experiment requires at least two variants. For every variant, configure:

  • Name -- a label such as "Control" or "Treatment A".
  • Description -- optional notes about what this variant changes.
  • Flow / content assignment -- the specific flow or content configuration shown to users in this variant.
  • Traffic split percentage -- the share of eligible users assigned to this variant.

All variant split percentages must sum to exactly 100%. A typical two-variant setup uses a 50/50 split, but you can allocate less traffic to riskier variants (e.g., 80/20).
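The two rules above -- at least two variants, splits summing to exactly 100% -- can be checked with a short validation routine. A minimal sketch (hypothetical names, not the product's API):

```python
from dataclasses import dataclass

@dataclass
class Variant:
    name: str          # e.g. "Control" or "Treatment A"
    traffic_pct: int   # share of eligible users, 0-100
    description: str = ""

def validate_splits(variants):
    """Reject configurations that violate the variant rules."""
    if len(variants) < 2:
        raise ValueError("an experiment requires at least two variants")
    total = sum(v.traffic_pct for v in variants)
    if total != 100:
        raise ValueError(f"splits sum to {total}%, expected exactly 100%")

# An 80/20 split that routes less traffic to the riskier variant:
validate_splits([Variant("Control", 80), Variant("Treatment A", 20)])
```

A configuration like 60/30 fails because 10% of traffic would be unaccounted for.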

Experiment Statuses

An experiment moves through the following lifecycle:

Status    | Description
DRAFT     | Initial state. Configure variants and targeting before starting.
RUNNING   | Actively assigning users and collecting data.
PAUSED    | Temporarily halted. Existing assignments are preserved but no new users are assigned.
COMPLETED | The experiment has concluded. Results are final and available for review.
ARCHIVED  | Removed from active views. Data is retained for historical reference.
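The lifecycle above can be expressed as a small state machine. The transition set below is one plausible reading of the statuses described here (e.g. it assumes only completed experiments can be archived), not a definitive specification:

```python
from enum import Enum

class Status(Enum):
    DRAFT = "draft"
    RUNNING = "running"
    PAUSED = "paused"
    COMPLETED = "completed"
    ARCHIVED = "archived"

# Allowed transitions implied by the lifecycle table (assumed, not authoritative).
ALLOWED = {
    Status.DRAFT:     {Status.RUNNING},
    Status.RUNNING:   {Status.PAUSED, Status.COMPLETED},
    Status.PAUSED:    {Status.RUNNING, Status.COMPLETED},
    Status.COMPLETED: {Status.ARCHIVED},
    Status.ARCHIVED:  set(),   # terminal: data retained, no further changes
}

def transition(current, target):
    """Move to `target` if the lifecycle permits it, else raise."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target
```

Encoding transitions this way makes invalid moves (such as restarting an archived experiment) fail loudly instead of silently corrupting results.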

Managing Experiment Lifecycle

Starting -- from the DRAFT state, click Start Experiment to begin assigning users. Ensure your variants and audience are fully configured before starting.

Pausing -- pause a running experiment if you need to investigate unexpected results or make adjustments. Users already assigned retain their variant.

Completing -- mark an experiment as completed once you have collected enough data to make a decision. This freezes the results and prevents new assignments.

Archiving -- archive completed experiments to keep your experiment list focused on active work.
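Two behaviors described above -- assignments are sticky, and a paused experiment accepts no new users -- can be sketched with deterministic hash-based bucketing. This is an illustrative model of how such assignment typically works, not the product's actual implementation:

```python
import hashlib

def assign(user_id, experiment_id, status, variants, existing):
    """Return the user's variant name, or None if no assignment is made.

    variants: list of (name, traffic_pct) tuples summing to 100.
    existing: dict mapping user_id -> variant name from prior assignments.
    """
    if user_id in existing:
        return existing[user_id]      # sticky: kept even while PAUSED
    if status != "RUNNING":
        return None                   # only a running experiment assigns new users
    # Deterministic bucketing: hash user+experiment into 0-99,
    # then walk the cumulative traffic splits.
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    cumulative = 0
    for name, pct in variants:
        cumulative += pct
        if bucket < cumulative:
            existing[user_id] = name
            return name
```

Hashing on the user and experiment IDs means the same user always lands in the same bucket, so pausing and resuming never reshuffles anyone.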

Best Practices

  • Define your success metric and minimum sample size before starting.
  • Avoid editing variant configurations while an experiment is running -- this invalidates collected data.
  • Run experiments for at least one to two weeks to account for usage pattern variation.
  • Use audience targeting to exclude internal users or test accounts from experiment results.
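For the first best practice -- fixing a minimum sample size up front -- the standard normal-approximation formula for comparing two proportions gives a per-variant estimate. This is general statistics, not a feature of the experiment builder; parameter names are illustrative:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate users needed per variant for a two-proportion test.

    baseline: expected conversion rate of the control (e.g. 0.20)
    mde: minimum detectable effect as an absolute difference (e.g. 0.05)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    p_bar = baseline + mde / 2                      # pooled proportion estimate
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / mde ** 2
    return int(n) + 1

# Detecting a 5-point lift over a 20% baseline needs roughly 1,100 users per variant.
n = sample_size_per_variant(baseline=0.20, mde=0.05)
```

Note how the required sample shrinks sharply as the detectable effect grows: halving the precision you demand roughly quarters the users you need, which is why defining the effect size before launch matters.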