Experiment Builder
The experiment builder walks you through creating, configuring, and launching an A/B test. Each experiment defines which content variants to test, how traffic is split, and which audience to target.
Creating an Experiment
To create a new experiment:
- Navigate to Experiments in the dashboard sidebar.
- Click Create Experiment.
- Fill in the required fields:
  - Name -- a descriptive label for the experiment (e.g., "Onboarding v2 vs. Checklist").
  - Hypothesis -- what you expect to learn (e.g., "A shorter onboarding flow will increase completion rate").
  - Content -- select the flow or content item you want to test.
  - Audience -- optionally restrict the experiment to a specific audience segment.
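If it helps to picture the configuration as data, the sketch below shows one plausible shape for these fields. The `ExperimentDraft` interface and its field names are illustrative assumptions, not the product's actual API.

```typescript
// Hypothetical shape of an experiment definition; the interface and field
// names are illustrative, not the product's actual API.
interface ExperimentDraft {
  name: string;        // descriptive label shown in the dashboard
  hypothesis: string;  // what you expect to learn
  contentId: string;   // the flow or content item under test
  audienceId?: string; // optional audience segment restriction
}

const draft: ExperimentDraft = {
  name: "Onboarding v2 vs. Checklist",
  hypothesis: "A shorter onboarding flow will increase completion rate",
  contentId: "flow_onboarding_v2",
  audienceId: "segment_new_signups",
};
```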
Configuring Variants
Each experiment requires at least two variants. For every variant, configure:
- Name -- a label such as "Control" or "Treatment A".
- Description -- optional notes about what this variant changes.
- Flow / content assignment -- the specific flow or content configuration shown to users in this variant.
- Traffic split percentage -- the share of eligible users assigned to this variant.
All variant split percentages must sum to exactly 100%. A typical two-variant setup uses a 50/50 split, but you can allocate less traffic to riskier variants (e.g., 80/20).
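To make the 100% rule concrete, here is a minimal validation sketch. The `Variant` type and `validateVariants` helper are hypothetical, but they encode the constraints described above.

```typescript
interface Variant {
  name: string;
  description?: string;
  contentId: string;      // flow / content shown to users in this variant
  trafficPercent: number; // share of eligible users, 0-100
}

// Validate that an experiment's variants are well formed:
// at least two variants, with splits that sum to exactly 100%.
function validateVariants(variants: Variant[]): void {
  if (variants.length < 2) {
    throw new Error("An experiment requires at least two variants");
  }
  const total = variants.reduce((sum, v) => sum + v.trafficPercent, 0);
  if (total !== 100) {
    throw new Error(`Traffic splits must sum to 100%, got ${total}%`);
  }
}

// A typical 50/50 setup; an 80/20 split works the same way.
validateVariants([
  { name: "Control", contentId: "flow_onboarding_v1", trafficPercent: 50 },
  { name: "Treatment A", contentId: "flow_onboarding_v2", trafficPercent: 50 },
]);
```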
Experiment Statuses
An experiment moves through the following lifecycle:
| Status | Description |
|---|---|
| DRAFT | Initial state. Configure variants and targeting before starting. |
| RUNNING | Actively assigning users and collecting data. |
| PAUSED | Temporarily halted. Existing assignments are preserved but no new users are assigned. |
| COMPLETED | The experiment has concluded. Results are final and available for review. |
| ARCHIVED | Removed from active views. Data is retained for historical reference. |
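Expressed as a type, the lifecycle might look like the sketch below. The string values mirror the status names in the table; the union type and helper function are illustrative, not part of the product's API.

```typescript
// The lifecycle states from the table above, as a union type. The string
// values mirror the status names; the type and helper are illustrative.
type ExperimentStatus =
  | "DRAFT"
  | "RUNNING"
  | "PAUSED"
  | "COMPLETED"
  | "ARCHIVED";

// Only RUNNING experiments assign new users; PAUSED and COMPLETED preserve
// existing assignments but accept no new ones.
function acceptsNewAssignments(status: ExperimentStatus): boolean {
  return status === "RUNNING";
}
```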
Managing Experiment Lifecycle
- Starting -- from the DRAFT state, click Start Experiment to begin assigning users. Ensure your variants and audience are fully configured before starting.
- Pausing -- pause a running experiment if you need to investigate unexpected results or make adjustments. Users already assigned retain their variant.
- Completing -- mark an experiment as completed once you have collected enough data to make a decision. This freezes the results and prevents new assignments.
- Archiving -- archive completed experiments to keep your experiment list focused on active work.
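The transitions described above can be summarized as a small map. Exactly which transitions the builder allows (for example, whether a paused experiment can resume) is an assumption here; the sketch is illustrative only.

```typescript
// Same status union as in the previous sketch, repeated so this block is self-contained.
type ExperimentStatus = "DRAFT" | "RUNNING" | "PAUSED" | "COMPLETED" | "ARCHIVED";

// Assumed transition map: which target states each status can move to.
const allowedTransitions: Record<ExperimentStatus, ExperimentStatus[]> = {
  DRAFT: ["RUNNING"],               // Start Experiment
  RUNNING: ["PAUSED", "COMPLETED"], // pause or complete
  PAUSED: ["RUNNING", "COMPLETED"], // resume or complete
  COMPLETED: ["ARCHIVED"],          // archive once results are reviewed
  ARCHIVED: [],                     // terminal state
};

function canTransition(from: ExperimentStatus, to: ExperimentStatus): boolean {
  return allowedTransitions[from].includes(to);
}

console.log(canTransition("DRAFT", "RUNNING"));     // true
console.log(canTransition("COMPLETED", "RUNNING")); // false: results are frozen
```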
Best Practices
- Define your success metric and minimum sample size before starting.
- Avoid editing variant configurations while an experiment is running -- this invalidates collected data.
- Run experiments for at least one to two weeks to account for variation in usage patterns (e.g., weekday vs. weekend behavior).
- Use audience targeting to exclude internal users or test accounts from experiment results.
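For the last point, one way to think about excluding internal traffic is a simple eligibility check before assignment. The `User` fields and email-domain convention below are assumptions; in practice you would express the same rule as an audience segment in the dashboard.

```typescript
// Hypothetical eligibility check that keeps internal users and test accounts
// out of experiment assignment; the User fields and domain are assumptions.
interface User {
  id: string;
  email: string;
  isTestAccount: boolean;
}

function isEligible(user: User, internalDomain = "acme.com"): boolean {
  if (user.isTestAccount) return false;                         // exclude test accounts
  if (user.email.endsWith(`@${internalDomain}`)) return false;  // exclude internal users
  return true;
}

console.log(isEligible({ id: "u_1", email: "jo@acme.com", isTestAccount: false })); // false
```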