# Experiment Results
Once an experiment has collected data, the results page provides per-variant metrics to help you determine which variant performed best.
## Per-Variant Metrics
The results view displays the following metrics for each variant:
| Metric | Description |
|---|---|
| Participants | Number of unique users assigned to this variant (sampleSize). |
| Conversions | Number of users who completed the target action. |
| Conversion Rate | Percentage of participants who converted (conversions / sampleSize). |
These values correspond to the ComputedResultVariant schema returned by the API, which includes variantId, variantName, sampleSize, conversions, and conversionRate.
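A per-variant payload shaped like the `ComputedResultVariant` schema can be mapped onto a small typed container. The field names come from the schema above; the defensive recomputation of the rate is an assumption, not documented API behavior:

```python
from dataclasses import dataclass

@dataclass
class ComputedResultVariant:
    """Mirrors the ComputedResultVariant fields returned by the API."""
    variantId: str
    variantName: str
    sampleSize: int
    conversions: int
    conversionRate: float  # conversions / sampleSize

def from_api(payload: dict) -> ComputedResultVariant:
    # Recompute the rate if the payload omits it (illustrative fallback).
    rate = payload.get("conversionRate")
    if rate is None:
        size = payload["sampleSize"]
        rate = payload["conversions"] / size if size else 0.0
    return ComputedResultVariant(
        variantId=payload["variantId"],
        variantName=payload["variantName"],
        sampleSize=payload["sampleSize"],
        conversions=payload["conversions"],
        conversionRate=rate,
    )
```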
## Interpreting Results
When reviewing results:
- Compare conversion rates across variants. A higher conversion rate indicates the variant drove more users to complete the target action.
- Check sample sizes: results from variants with very few participants may not be reliable. Aim for at least several hundred users per variant before drawing conclusions.
- Consider both absolute and relative differences: 12% vs. 10% conversion is a 2-percentage-point absolute gain and a meaningful 20% relative improvement, while 50.1% vs. 50.0% is likely noise.
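The two comparisons above can be sketched as small helpers (a minimal sketch; these functions are not part of the API):

```python
def absolute_difference(rate_b: float, rate_a: float) -> float:
    """Percentage-point gap between two conversion rates."""
    return rate_b - rate_a

def relative_improvement(rate_b: float, rate_a: float) -> float:
    """Relative lift of B over A, e.g. 0.12 vs. 0.10 is a 20% lift."""
    if rate_a == 0:
        raise ValueError("baseline conversion rate must be non-zero")
    return (rate_b - rate_a) / rate_a
```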
## Confidence and Significance
Statistical confidence helps you distinguish real differences from random variation:
- 95% confidence is the standard threshold. Below this level, the observed difference may be due to chance.
- Larger sample sizes narrow the confidence interval and make results more reliable.
- Longer experiment duration reduces the impact of daily or weekly usage fluctuations.
If results are inconclusive after a reasonable period, consider increasing traffic allocation or extending the experiment duration.
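One common way to quantify the confidence described above is a two-proportion z-test on the per-variant counts. This is a standard statistical sketch, not a method the product documents; |z| at or above 1.96 corresponds roughly to the 95% confidence threshold:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided z statistic for a difference in conversion rates.

    |z| >= 1.96 corresponds to roughly 95% confidence that the
    difference between variants is not due to chance.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

Note how a larger sample shrinks the standard error `se`, which is why bigger samples make the same observed difference more significant.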
## Determining a Winner
A variant is a clear winner when:
- Its conversion rate is meaningfully higher than the other variants.
- The sample size is large enough to rule out random chance.
- The result has been stable over multiple days rather than driven by a single spike.
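The first two criteria can be sketched as a simple guard over the per-variant fields. The thresholds here are illustrative assumptions, and the stability-over-multiple-days check would need time-series data not covered by this sketch:

```python
def is_clear_winner(best: dict, runner_up: dict,
                    min_sample: int = 500,
                    min_relative_lift: float = 0.05) -> bool:
    """Rough winner check using sampleSize and conversionRate fields.

    Thresholds are illustrative; does NOT check day-over-day stability.
    """
    if best["sampleSize"] < min_sample or runner_up["sampleSize"] < min_sample:
        return False  # too little data to rule out chance
    if runner_up["conversionRate"] == 0:
        return best["conversionRate"] > 0
    lift = (best["conversionRate"] - runner_up["conversionRate"]) / runner_up["conversionRate"]
    return lift >= min_relative_lift
```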
Once you identify a winner, mark the experiment as Completed and roll out the winning variant to all users by updating your content configuration.
## Recommended Sample Sizes
The required sample size depends on the expected difference between variants:
| Expected Improvement | Recommended Per-Variant Sample |
|---|---|
| Large (over 20% relative) | 200-500 users |
| Medium (10-20% relative) | 500-2,000 users |
| Small (under 10% relative) | 2,000-10,000 users |
These are rough guidelines. The smaller the difference you need to detect, the more data you need.
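For a less rough estimate, a standard power-analysis approximation for comparing two proportions can be used. This formula is not from the product itself, and it is typically more conservative than the ranges in the table above (defaults correspond to ~95% confidence and ~80% power):

```python
import math

def sample_size_per_variant(baseline_rate: float, relative_lift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate per-variant sample size needed to detect a given
    relative lift over a baseline conversion rate.

    Standard two-proportion formula: n = 2 * (z_a + z_b)^2 * p(1-p) / delta^2
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2          # average rate across variants
    delta = abs(p2 - p1)           # absolute difference to detect
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return math.ceil(n)
```

As the comment in the table implies, halving the detectable lift roughly quadruples the required sample, since `delta` is squared in the denominator.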
## Exporting Results
Export experiment results for further analysis:
1. Open the experiment results page.
2. Click Export to download variant-level data as CSV or JSON.

The export includes per-variant participant counts, conversions, and conversion rates alongside experiment metadata (name, status, start and end dates).
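A downloaded export can be loaded for further analysis. The exact file layout is an assumption here; this sketch expects CSV columns or JSON fields matching the per-variant metric names described above:

```python
import csv
import io
import json

def load_export(text: str, fmt: str = "json") -> list:
    """Parse exported results into a list of variant dicts.

    Assumes field names matching the per-variant metrics (sampleSize,
    conversions, conversionRate); the real export layout may differ.
    """
    if fmt == "json":
        data = json.loads(text)
        # Tolerate both {"variants": [...]} wrappers and bare lists.
        return data.get("variants", data) if isinstance(data, dict) else data
    reader = csv.DictReader(io.StringIO(text))
    return [
        {**row,
         "sampleSize": int(row["sampleSize"]),
         "conversions": int(row["conversions"]),
         "conversionRate": float(row["conversionRate"])}
        for row in reader
    ]
```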