Experiment Results

Once an experiment has collected data, the results page provides per-variant metrics to help you determine which variant performed best.

Per-Variant Metrics

The results view displays the following metrics for each variant:

| Metric | Description |
| --- | --- |
| Participants | Number of unique users assigned to this variant (sampleSize). |
| Conversions | Number of users who completed the target action. |
| Conversion Rate | Percentage of participants who converted (conversions / sampleSize). |

These values correspond to the ComputedResultVariant schema returned by the API, which includes variantId, variantName, sampleSize, conversions, and conversionRate.
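As a minimal sketch, the per-variant metrics above can be derived from records shaped like the ComputedResultVariant schema; the sample values here are illustrative, not real API output:

```python
def conversion_rate(variant: dict) -> float:
    """Conversion rate as a fraction; 0.0 when the variant has no participants."""
    if variant["sampleSize"] == 0:
        return 0.0
    return variant["conversions"] / variant["sampleSize"]

# Illustrative records following the ComputedResultVariant field names.
variants = [
    {"variantId": "a", "variantName": "Control", "sampleSize": 400, "conversions": 48},
    {"variantId": "b", "variantName": "Treatment", "sampleSize": 410, "conversions": 62},
]
for v in variants:
    print(v["variantName"], round(conversion_rate(v), 4))
```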

Interpreting Results

When reviewing results:

  1. Compare conversion rates across variants. A higher conversion rate indicates the variant drove more users to complete the target action.
  2. Check sample sizes -- results from variants with very few participants may not be reliable. Aim for at least several hundred users per variant before drawing conclusions.
  3. Look at both the absolute and relative difference -- a variant at 12% conversion vs. 10% is a 2-point absolute gain and a meaningful 20% relative improvement, while 50.1% vs. 50.0% is likely noise.
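The absolute and relative comparisons in step 3 can be sketched as:

```python
def differences(control_rate: float, variant_rate: float) -> tuple[float, float]:
    """Return (absolute difference, relative improvement over control)."""
    absolute = variant_rate - control_rate
    relative = absolute / control_rate if control_rate else float("inf")
    return absolute, relative

# 12% vs. 10%: a 2-point absolute gain is a 20% relative improvement.
abs_diff, rel_diff = differences(0.10, 0.12)
```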

Confidence and Significance

Statistical confidence helps you distinguish real differences from random variation:

  • 95% confidence is the standard threshold. Below this level, the observed difference may be due to chance.
  • Larger sample sizes narrow the confidence interval and make results more reliable.
  • Longer experiment duration reduces the impact of daily or weekly usage fluctuations.

If results are inconclusive after a reasonable period, consider increasing traffic allocation or extending the experiment duration.
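One common way to check the 95% confidence threshold is a two-proportion z-test. This is a general statistical sketch using only the counts shown on the results page, not the platform's own significance calculation:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0, 1.0
    z = (p_b - p_a) / se
    # Normal-approximation p-value via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative counts: 10% vs. 13% conversion at 1,000 users per variant.
z, p = two_proportion_z(100, 1000, 130, 1000)
significant = p < 0.05  # below 0.05 meets the 95% confidence threshold
```

Note how the standard error shrinks as n_a and n_b grow, which is why larger samples narrow the confidence interval.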

Determining a Winner

A variant is a clear winner when:

  • Its conversion rate is meaningfully higher than the other variants.
  • The sample size is large enough to rule out random chance.
  • The result has been stable over multiple days rather than driven by a single spike.

Once you identify a winner, mark the experiment as Completed and roll out the winning variant to all users by updating your content configuration.
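The winner criteria above can be encoded as a simple heuristic. The thresholds here (minimum sample and minimum relative lift) are illustrative assumptions, not product defaults:

```python
def is_clear_winner(winner: dict, others: list[dict],
                    min_sample: int = 500,
                    min_relative_lift: float = 0.05) -> bool:
    """Heuristic check: enough data everywhere, and a meaningful lift over every other variant."""
    if winner["sampleSize"] < min_sample:
        return False
    for other in others:
        if other["sampleSize"] < min_sample:
            return False
        if other["conversionRate"] == 0:
            continue  # any positive rate beats a zero-rate variant
        lift = (winner["conversionRate"] - other["conversionRate"]) / other["conversionRate"]
        if lift < min_relative_lift:
            return False
    return True
```

A check like this does not replace a significance test or multi-day stability review; it only screens out underpowered or marginal results.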

The required sample size depends on the expected difference between variants:

| Expected Improvement | Recommended Per-Variant Sample |
| --- | --- |
| Large (over 20% relative) | 200-500 users |
| Medium (10-20% relative) | 500-2,000 users |
| Small (under 10% relative) | 2,000-10,000 users |

These are rough guidelines. The smaller the difference you need to detect, the more data you need.
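For a more precise estimate than the table, the standard two-proportion sample-size formula (normal approximation, 95% confidence, 80% power) can be sketched as follows; exact numbers depend heavily on the baseline rate, so they will not always match the table's bands:

```python
import math

def required_sample_size(base_rate: float, relative_lift: float) -> int:
    """Rough per-variant sample size to detect a relative lift over base_rate
    with a two-sided two-proportion test (95% confidence, 80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha = 1.96  # two-sided 95% confidence
    z_beta = 0.84   # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)
```

As the guideline says, halving the lift you need to detect roughly quadruples the required sample.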

Exporting Results

Export experiment results for further analysis:

  1. Open the experiment results page.
  2. Click Export to download variant-level data as CSV or JSON.
  3. The export includes per-variant participant counts, conversions, and conversion rates alongside experiment metadata (name, status, start and end dates).