A/B Testing

A/B testing lets you compare different versions of your digital adoption content to determine which variant drives better user outcomes. Run experiments on flows, tooltips, and other content to make data-driven decisions.

What Are Experiments?

An experiment splits your users into groups and shows each group a different content variant. By measuring how each group performs, you can identify which version is more effective before rolling it out to all users.

Common use cases:

  • Test different onboarding flow sequences to maximize completion rates.
  • Compare tooltip copy or placement to improve feature discovery.
  • Evaluate whether a guided tour or a checklist drives better activation.

How Variant Assignment Works

BreakGround uses deterministic hashing to assign users to variants. The system hashes a combination of the userId and experimentId to produce a consistent assignment:

  • The same user always sees the same variant for a given experiment, even across sessions and devices.
  • No server-side state is required to maintain assignment -- the hash is computed at evaluation time.
  • Assignment is distributed across variants in proportion to the configured traffic split percentages.
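The exact hash function BreakGround uses is not specified here, but the scheme above can be sketched in Python, assuming SHA-256 and a weighted traffic split (the function name and signature are illustrative, not BreakGround's actual API):

```python
import hashlib

def assign_variant(user_id, experiment_id, variants):
    """Deterministically map a user to a variant.

    variants: list of (name, weight) pairs; weights must sum to 1.0.
    """
    key = f"{user_id}:{experiment_id}".encode("utf-8")
    digest = hashlib.sha256(key).hexdigest()
    # First 8 hex digits -> a uniform point in [0, 1).
    bucket = int(digest[:8], 16) / 0x100000000
    cumulative = 0.0
    for name, weight in variants:
        cumulative += weight
        if bucket < cumulative:
            return name
    return variants[-1][0]  # guard against floating-point rounding

split = [("control", 0.5), ("treatment", 0.5)]
# Same user + experiment always hashes to the same variant.
variant = assign_variant("user-42", "exp-onboarding", split)
```

Because the assignment depends only on the hash inputs, any client or server running the same function reaches the same answer with no stored state and no coordination.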

Control vs. Treatment Groups

Every experiment has at least two variants:

  • Control -- the existing experience (or no content at all). This is your baseline.
  • Treatment -- the new variant you are testing against the control.

You can add more than two variants to test multiple alternatives simultaneously, though two-variant experiments produce the clearest results with the smallest sample size.
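A multi-variant setup can be sketched as a simple configuration; the field names and experiment identifier below are illustrative, not BreakGround's actual schema:

```python
# Hypothetical three-variant experiment definition. One control plus
# two treatments, with traffic weights that must cover all users.
experiment = {
    "id": "onboarding-tour-vs-checklist",
    "variants": [
        {"name": "control", "weight": 0.34},      # existing experience
        {"name": "guided-tour", "weight": 0.33},  # treatment A
        {"name": "checklist", "weight": 0.33},    # treatment B
    ],
}

# Sanity check: weights should sum to 1.0 so every user is assigned.
total = sum(v["weight"] for v in experiment["variants"])
assert abs(total - 1.0) < 1e-9
```

Note that splitting traffic three ways gives each variant roughly a third of the users a two-variant test would get, which is why more variants need more total traffic.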

Statistical Significance

To draw reliable conclusions, an experiment needs sufficient data:

  • Sample size -- more participants yield more reliable results. Aim for at least several hundred users per variant.
  • Duration -- run experiments long enough to account for day-of-week and usage pattern variation. One to two weeks is a reasonable minimum.
  • Conversion rate difference -- small differences between variants require larger sample sizes to confirm with confidence.

Avoid stopping an experiment as soon as one variant appears ahead. Early results are often misleading due to small sample sizes.
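To decide whether an observed difference is more than noise, a standard two-proportion z-test can be applied to the conversion counts. This is a generic statistical sketch, not BreakGround's reporting implementation:

```python
from math import sqrt, erf

def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    Returns (z, p_value). A small p-value (e.g. < 0.05) suggests the
    difference between variants is unlikely to be due to chance alone.
    """
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, 100 conversions out of 1,000 control users versus 150 out of 1,000 treatment users yields a small p-value, while identical rates yield a p-value near 1. Note that repeatedly running this test on interim data (peeking) inflates the false-positive rate, which is the statistical reason behind the warning above.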

When to Use Experiments

Use experiments when you want to validate a hypothesis before committing to a change. They are most valuable when:

  • You have enough traffic to reach statistical significance within a reasonable timeframe.
  • The metric you are optimizing (completion rate, engagement, time-to-complete) is clearly defined.
  • You can isolate the variable being tested -- changing one thing at a time produces the clearest signal.
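Whether you have "enough traffic" can be estimated up front with Lehr's rule of thumb, a common approximation for the per-variant sample size needed for roughly 80% power at a 5% two-sided significance level. This is a planning heuristic, not a BreakGround feature:

```python
from math import ceil

def required_sample_size(baseline_rate, min_detectable_diff):
    """Lehr's rule of thumb: approximate users needed per variant
    for ~80% power at a 5% two-sided significance level.

    baseline_rate: current conversion rate (e.g. 0.20 for 20%).
    min_detectable_diff: smallest absolute lift you care to detect.
    """
    # Use the average of the baseline and expected treatment rate.
    p = baseline_rate + min_detectable_diff / 2
    return ceil(16 * p * (1 - p) / min_detectable_diff ** 2)

# Detecting a 5-point lift over a 20% baseline needs on the order of
# a thousand users per variant.
n = required_sample_size(0.20, 0.05)
```

Dividing the required per-variant count by your daily eligible traffic gives a rough experiment duration, which you can check against the one-to-two-week minimum suggested above.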