
Designing experiments

Read time: 16 minutes
Last edited: Oct 07, 2024
Experimentation is available for all subscription customers

Experimentation is available to all customers on a Developer, Foundation, or Enterprise plan. If you're on an older Pro or Enterprise plan, Experimentation is available as an add-on. To learn more, read about our pricing. To subscribe to a different plan, contact Sales.

Overview

This guide provides strategies and best practices to design an effective experiment in LaunchDarkly.

The changes you make to your applications should be purposeful and provide value to the businesses you support and the people who use your software. But how do you know that the decisions you make are valuable? Experimentation can help you determine this.

It is critical to plan out and document experiments before you run them. An experiment design document captures why you are running the test and the decisions you want to make based on its outcomes.

The guide covers the following topics:

  • Types of experiments
  • Generate ideas
  • Formulate a hypothesis
  • Choose a sample size
  • Determine an audience
  • Choose variations
  • Set experiment metrics
  • Create a roadmap

To learn more about Experimentation, read Experimentation. To learn about Experimentation use cases, read Example experiments.

Prerequisites

To complete this guide, you must have the following prerequisites:

  • A basic understanding of feature flags
  • A basic understanding of your business's needs or key performance indicators (KPIs)

Concepts

It’s not critical to understand these concepts completely, but some awareness of what they are will be helpful. Don’t worry if some of these items are complicated. Experimentation is a scientific discipline that takes time to learn and understand.

You should understand these concepts before you read this guide:

How LaunchDarkly performs experiments

Experiments help you learn and demonstrate what you have learned to others. You can use experiments to gather supplemental data to confirm or refine your ideas. To learn more, read Types of experiments.

In LaunchDarkly, experiments can:

  • validate new ideas by testing multiple variations of a feature,
  • determine your user base's interest in a feature before you build it,
  • gather performance data for a feature, service, or API,
  • increase the adoption of a product by determining the features your customers prefer,
  • drive revenue and conversion rate by rolling out successful variations to the rest of your user base,
  • and more.

You can create an experiment by connecting any flag to a metric or group of metrics that you want to track. Because you can wrap any part of your technology stack or product in a feature flag, you can use experiments to test for much more than the efficacy of user interface (UI) changes.
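
As a sketch of what this looks like in code, here is a minimal example using the LaunchDarkly server-side Python SDK. The flag key, metric key, and ranking functions are hypothetical; you connect the flag and metric in the LaunchDarkly UI when you build the experiment:

```python
# A sketch using the LaunchDarkly server-side Python SDK
# (launchdarkly-server-sdk). The flag key "new-search-ranking" and the
# metric key "search-conversion" are hypothetical examples.
import ldclient
from ldclient import Context
from ldclient.config import Config

def current_ranking(query):
    return sorted([query + "-result-a", query + "-result-b"])

def new_ranking(query):
    return sorted([query + "-result-a", query + "-result-b"], reverse=True)

ldclient.set_config(Config("your-sdk-key"))
client = ldclient.get()

context = Context.builder("user-key-123").set("country", "US").build()

# Any code path wrapped in a flag can be included in an experiment.
if client.variation("new-search-ranking", context, False):
    results = new_ranking("boots")
else:
    results = current_ranking("boots")

# Send the event that the experiment's metric listens for, for example
# when the customer converts.
client.track("search-conversion", context)

client.close()
```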

What hypotheses are and how they relate to experiments

A hypothesis is a theoretical statement that an experiment can prove or disprove. A well-constructed hypothesis has both a positive and negative result defined.

For example, you may hypothesize that changing the button position to the top-right corner will increase the site's click-through rate, and leaving the button in its current position will not cause any changes to the site's click-through rate.

To learn more about writing effective hypotheses, read Formulate a hypothesis.

Types of experiments

Different types of experiments can test different types of hypotheses. LaunchDarkly supports two main types of experiments: feature change experiments and funnel experiments.

Feature change experiments

Feature change experiments let you measure the effect different flag variations have on a metric. You can use feature change experiments to test a wide variety of feature changes.

Here are some examples of feature change experiments:

  • Feature validation: After developing a feature, make sure customer reaction to the feature is positive. If it's not, iterate on the new feature until it's neutral or positive for the target metric.
  • Risk mitigation: Some changes must go out, such as upgrading all servers to a new version of an operating system. Because it's difficult to test every failure scenario in advance, you can roll these changes out slowly and watch critical metrics for dips.
  • Optimization: Test in parts of an app that can be parameterized, so that an algorithm can select new configurations to try.

Funnel optimization experiments

A "funnel" is a marketing model that describes a customer's journey through your purchasing or conversion cycle, typically from the awareness stage to the purchasing stage. LaunchDarkly's funnel optimization experiments use multiple metrics to track the performance of each of the steps in your funnel over time.

Funnel experiments use funnel metric groups, which are reusable, ordered lists of metrics you can use to standardize what behavior you're tracking across multiple experiments. To learn more, read Funnel metric groups.
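
As an illustration, each metric in a funnel metric group listens for an event your application sends. A hypothetical four-step purchase funnel might be instrumented like this, reusing the `client` and `context` from the earlier sketch:

```python
# Hypothetical funnel steps: each event key below would match one metric in
# an ordered funnel metric group. Send each event as the customer completes
# that step of the journey.
client.track("viewed-product-page", context)
client.track("added-item-to-cart", context)
client.track("started-checkout", context)
client.track("completed-purchase", context)
```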

Generate ideas

When you plan your experiment, it can be helpful to summarize it as a brief, simple explanation of what you are trying to prove and why. You may also want to note where the rationale behind the experiment originated. References to any end user research, prior bugs, or feature requests provide context for what you are trying to achieve.

Here are some example questions you can answer with an experiment:

  • Does removing complexity and adding white space result in customers spending more time on the site?
  • Do more product images lead to increased sales?
  • Does page load time increase significantly when search results are sorted?
  • What is the best color for a call to action button?
  • What is the best location on the page for the button?

Remember that you can approach a solution from many perspectives. That means many hypotheses per idea and, potentially, multiple experiments.

Formulate a hypothesis

A hypothesis should be specific and answer a single question. You may need to run multiple experiments to answer numerous questions about a feature.

Here are two example hypotheses:

  • “We believe that by rewriting our results page in React, we will reduce page load time and increase utilization.”
  • “We believe that by adding a chatbot to the second page of our account registration process, we will decrease incomplete signups and increase total signup completion.”

Your hypothesis must be as robust and specific as possible to get useful data from your experiment. An imprecise hypothesis can allow more subjective interpretation of the results.

A good hypothesis has the following considerations:

  • Specific: The more specific you are in your expectations, the easier it is to determine whether you have a real effect and what you need to do next.
  • Rigorous: The hypothesis must have solid metrics to work toward, and you should review them regularly.
  • Multiplicative: A great hypothesis lets you generate further hypotheses. You should be able to build your next hypothesis from the results of the current one. It should generate value no matter what the results of the experiment are.

Use this template to write useful hypotheses

Most hypotheses can be reduced to the following statement:

"By doing [X] to [Y], we expect [Z]."

Choose a sample size

The number of end users included in an experiment is called the sample size. The larger an experiment’s sample size, the more confident you can be in the experiment’s outcome. However, one benefit of using Bayesian statistics for our experimentation model is that you can still get useful results even with small sample sizes. To learn more, read Experimentation and Bayesian statistics.

While you assess an experiment's viability, consider how many unique visitors it takes to get a representative sample of your audience. Sample size estimators can tell you how many visitors you need, how long the experiment should run, and the estimated impact of the experiment. To calculate these numbers, you can use LaunchDarkly's sample size calculator.

When you use the sample size calculator, you can choose to calculate either:

  • the sample size and duration required to detect a given effect (Effect to sample size and duration), or
  • the minimum detectable effect for a given sample size and duration (Sample size and duration to effect).

The sections below explain how to use each option.

Effect to sample size and duration

Use this method to calculate the number of contexts, or sample size, you should include in an experiment when you know how big of an effect you want the treatment variation to have on your results.

To calculate the sample size, you need to know:

  • the Relative effect or Absolute effect you want the experiment to have:
    • Relative effect: the minimum difference in the metric between the control variation and the treatment variation, as a percentage
    • Absolute effect: the minimum difference in the metric between the control variation and the treatment variation
  • the Exposure rate of your experiment: the number of contexts you expect to be exposed to the experiment per day
  • the Control mean: the average of the metric in the control variation
  • the Control standard deviation: how much individual measurements of the metric in the control variation would differ from the control mean

You do not need to use or understand all of the advanced settings in order to use the calculator. However, if you are familiar with statistical concepts and want to calculate your sample size at a more granular level, you can specify several variables.

Other fields in the advanced setting section include:

  • Alternative hypothesis: the type of alternative for the hypothesis test. "Two-sided" tests whether the treatment is different from the control, "upper-tailed" tests whether the treatment is better than the control, and "lower-tailed" tests whether the treatment is worse than the control.
  • Significance level: the probability of detecting an effect when there is none, also called the false positive rate.
  • Power: the probability of correctly detecting an effect when an effect is truly present. The power is equal to 1 minus the false negative rate.
  • Treatment proportion: the proportion of traffic allocated to the treatment variation. The calculator assumes only two experiment variations: the control and treatment.
  • Standard deviation ratio: the ratio of the treatment variation standard deviation to the control variation standard deviation.

Sample size and duration to effect

Use this method to calculate the estimated minimum detectable effect for an experiment when you know approximately how many contexts will be included. The minimum detectable effect is an estimate of the smallest difference the experiment will be able to detect between the control variation and a treatment variation.

To calculate the minimum detectable effect, you need to know:

  • Exposure rate: the number of contexts you expect to be exposed to the experiment per day
  • Duration or Sample size:
    • Duration: the number of days you plan to run the experiment
    • Sample size: the total number of contexts you plan to include in the experiment
  • the Control mean: the average of the metric in the control variation
  • the Control standard deviation: how much individual measurements of the metric in the control variation would differ from the control mean

You do not need to use or understand all of the advanced settings in order to use the calculator. However, if you are familiar with statistical concepts and want to calculate your sample size at a more granular level, you can specify several variables.

Other fields in the advanced setting section include:

  • Alternative hypothesis: the type of alternative for the hypothesis test. "Two-sided" tests whether the treatment is different from the control, "upper-tailed" tests whether the treatment is better than the control, and "lower-tailed" tests whether the treatment is worse than the control.
  • Significance level: the probability of detecting an effect when there is none, also called the false positive rate.
  • Power: the probability of correctly detecting an effect when an effect is truly present. The power is equal to 1 minus the false negative rate.
  • Treatment proportion: the proportion of traffic allocated to the treatment variation. The calculator assumes only two experiment variations: the control and treatment.
  • Standard deviation ratio: the ratio of the treatment variation standard deviation to the control variation standard deviation.
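
For intuition, here is a rough sketch of the textbook frequentist calculation that underlies both options above. It assumes two variations with equal allocation and equal standard deviations and uses a normal approximation; it is an illustration of the concepts, not LaunchDarkly's exact implementation:

```python
# A rough sketch of the standard two-sample power calculation, assuming two
# variations, equal allocation, equal standard deviations, and a two-sided
# test. An illustration only, not LaunchDarkly's exact implementation.
from scipy.stats import norm

def sample_size_per_variation(effect, control_sd, significance=0.05, power=0.8):
    """Effect to sample size: contexts needed in each variation to detect
    an absolute difference of `effect` in the metric."""
    z_alpha = norm.ppf(1 - significance / 2)
    z_power = norm.ppf(power)
    return 2 * control_sd ** 2 * (z_alpha + z_power) ** 2 / effect ** 2

def minimum_detectable_effect(n_per_variation, control_sd, significance=0.05, power=0.8):
    """Sample size to effect: smallest absolute difference detectable with
    `n_per_variation` contexts in each variation."""
    z_alpha = norm.ppf(1 - significance / 2)
    z_power = norm.ppf(power)
    return (z_alpha + z_power) * control_sd * (2 / n_per_variation) ** 0.5

n = sample_size_per_variation(effect=0.5, control_sd=4.0)
print(round(n))              # ~1005 contexts per variation

exposure_per_day = 300       # assumed daily exposure per variation
print(n / exposure_per_day)  # ~3.3 days to reach that sample size

print(minimum_detectable_effect(n_per_variation=1005, control_sd=4.0))  # ~0.5
```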

After you have calculated your sample size, you can run your experiment until it includes the desired number of participants. We recommend running experiments for at least one week, if possible. For example, if your experiment needs to include 70,000 customers, it’s better to run an experiment that includes 10,000 customers per day for seven days than one that includes all 70,000 on the first day. This helps avoid “day of week” effects. To learn more about day of week effects and how they can affect your experiment, read Carryover bias and variation reassignment.

Experimentation keys

As you decide on a sample size, you may want to consider the number of Experimentation keys you have available in the LaunchDarkly plan you subscribe to. For example, if you have 50,000 Experimentation keys per month included in your plan, and you run ten experiments per month, you may want to limit your experiment audiences to no more than 5,000 keys each.

Experimentation keys include the total number of unique context keys, from server-side, client-side, and edge SDKs, included in each experiment:

  • if the same context key is in one experiment multiple times, LaunchDarkly counts it as one Experimentation key
  • if the same context key is in two different experiments, LaunchDarkly counts it as two Experimentation keys
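
Here is a hypothetical tally that illustrates these counting rules:

```python
# A hypothetical tally showing the counting rules. Each experiment maps to
# the set of unique context keys it included during the month; a key that
# appears multiple times in one experiment is deduplicated by the set.
experiments = {
    "checkout-button-color": {"user-1", "user-2", "user-3"},
    "search-ranking":        {"user-2", "user-3"},
}

# The same key in two different experiments counts twice.
total_experimentation_keys = sum(len(keys) for keys in experiments.values())
print(total_experimentation_keys)  # 5, although only 3 distinct users appear
```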

Determine an audience

An experiment requires two or more samples to test against. You may want to run an experiment on all of your customers, or you may want to target customers based on certain attributes so you can run the experiment on a smaller sample of your population. For example, you may want to run your experiment only on customers in the United States, or only those that have an account with your business.

Here are some possible ways to construct your experiment sample:

  • Logged in as opposed to anonymous
  • By company
  • By geography
  • Randomly

If you want to restrict your experiment audience to only customers with certain attributes, create a targeting rule on the flag you include in the experiment and run the experiment on that rule. If you don't want to restrict the audience for your experiment, run the experiment on the flag's default rule. To learn more, read Allocating experiment audiences.

An experiment also requires determining who or what will be randomly assigned to each variation that you're testing. For example, if you are testing an updated flow for processing purchase orders, you may want everyone in the same organization assigned to the same variation. If you are testing the performance of a backend change, you may instead want each request to your servers randomized individually. When you build the experiment, you can choose which context kind to use as your randomization unit. To learn more, read Randomization units.
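
As an illustration, here is a sketch of supplying both context kinds with a multi-context when evaluating a flag, assuming the LaunchDarkly server-side Python SDK with `client` set up as in the earlier sketch; the keys and flag are hypothetical. An experiment randomized by the organization context kind would then bucket every user in the same organization into the same variation:

```python
# Hypothetical keys. Supplying a multi-context gives LaunchDarkly both
# context kinds, so the experiment can randomize by whichever kind you
# chose as the randomization unit.
from ldclient import Context

org = Context.builder("org-acme-corp").kind("organization").build()
user = Context.builder("user-123").kind("user").build()
multi = Context.create_multi(org, user)

variation = client.variation("purchase-order-flow", multi, "current-flow")
```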

Mutually exclusive experiments

You may also want to prevent your audience from being in more than one experiment at a time, or in two closely related experiments at once. You can accomplish this by using layers to create mutually exclusive experiments. To learn more, read Mutually exclusive experiments.

Holdouts

Holdouts let you hold back a certain percentage of your audience from any running experiments. This lets you see the overall effect of your experiments on your customer base and answer questions about how effective they are. If you are running a holdout, you will need to decide whether to include your new experiment in it. To learn more, read Holdouts.

Choose variations

Before building the experiment, you need to decide how many variations you want to test, and what those variations are. For example, will you be testing a red checkout button versus a blue checkout button, or will you be testing a red versus a blue versus a green checkout button?

Then, you need to add the variations you choose to the flag you are including in your experiment. To learn more, read Creating flag variations.

Metrics and flag variations

Metrics should be applicable to all flag variations within an experiment

An experiment can only calculate a flag variation's probability to be best if the primary metric can measure something within the variation. To learn more, read Analyzing experiments.

A common goal of an experiment is finding out which flag variation performs better among a set of two or more options.

If you want to learn which flag variation performs better, it must be possible for that metric to measure something in all of the flag variations within the experiment. For example, imagine you have a 1-click checkout process and a 2-click checkout process. The 2-click checkout process requires the use of a "Continue" button. For customers that use the 2-click checkout process, you want to find out if a red or blue "Continue" button results in more completed checkouts.

This means your flag would have three possible variations:

  • 1-click checkout
  • 2-click checkout with a red button
  • 2-click checkout with a blue button

The experiment will use a custom conversion count metric that tracks total clicks of the "Continue" button. Including customers who see the 1-click checkout process would skew your results, because those customers never see either the red or the blue "Continue" button. Instead, exclude anyone who receives the "1-click checkout" variation from your experiment, so that the metric is applicable to all variations within the experiment.
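
As a sketch of the tracking side of this example, again assuming the Python SDK with hypothetical flag, variation, and metric keys:

```python
# Hypothetical keys, with `client` and `context` set up as in the earlier
# sketch. Only customers in the 2-click variations can ever trigger the
# "Continue" click metric, so the experiment should include only the rule
# serving those two variations.
variation = client.variation("checkout-flow", context, "one-click")

if variation in ("two-click-red", "two-click-blue"):
    # ... render the 2-click checkout with the assigned button color ...
    # When the customer clicks "Continue", record the conversion:
    client.track("continue-button-clicks", context)
```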

Set experiment metrics

A good experiment requires well-defined metrics. You must determine what kind and how many metrics to include in your experiment.

Decide which metrics to use

Identifying the right metrics is imperative to getting accurate results from your experiments.

Choosing metrics that correctly measure the effect of a change on your customers or codebase can be difficult. Where possible, choose metrics that are a direct result of the changes you are making, rather than those that might be influenced by other factors. For example, if you know your business's key performance indicators (KPIs), you may be able to break them down into smaller numeric goals, such as an item's average revenue per order, a server's response speed, or a link's click-through rate. These goals might make useful metrics to track in an experiment.

Metric types

LaunchDarkly supports the following metric types:

  • Clicked or tapped conversion metrics: Tracks the clicks on a UI element. For example, how frequently a customer clicks the Save button. Only compatible with JavaScript-based SDKs.
  • Custom conversion binary metrics: Tracks whether any arbitrary event occurred. For example, whether or not a customer search called a service.
  • Custom conversion count metrics: Counts events for any arbitrary event. For example, how many times a customer search called a service.
  • Custom numeric metrics: Tracks increases or decreases in numeric value against a baseline you set. For example, how many items are in a customer's cart when they check out of your online store.
  • Page viewed conversion metrics: Tracks how many times a page is viewed. For example, how many times a blog post is viewed based on three different titles. Only compatible with JavaScript-based SDKs.

To learn more, read Choose a metric type.
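
In server-side SDKs, custom metrics are fed by track calls whose event key matches the metric's event key. As a sketch, with hypothetical metric keys and the `client` and `context` from the earlier example:

```python
# Metric keys are hypothetical. The event key in each track call must match
# the metric's event key in LaunchDarkly.

# Custom conversion (binary or count) metric: send the event when it occurs.
client.track("search-called-service", context)

# Custom numeric metric: attach a numeric value to the event.
cart_items = ["boots", "socks", "hat"]
client.track("items-in-cart-at-checkout", context,
             metric_value=float(len(cart_items)))
```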

Choose how many and what kind of metrics to use

With feature change experiments, you will choose one primary metric to base your decision making on. With each additional metric you add to an experiment, decision making becomes harder, which is why you can choose only one primary metric. Your primary metric can be an individual metric you attach to the experiment, or part of a metric group. To learn more, read Primary and secondary metrics.

With funnel optimization experiments, you will choose a funnel metric group to base your decision making on. The order of metrics within funnel metric groups affects how LaunchDarkly analyzes the metrics. When you create a funnel metric group, each metric should measure a required step in the user journey. Users should not be able to skip any steps the funnel group is measuring, or take any steps out of order. Doing so will skew your experiment results. To learn more, read Metric groups.

Create a roadmap

There are many tools to help you manage an experiment's roadmap. Jira, Trello, Asana, Excel, and Google Sheets are all good candidates for storing and logging information on experiments.

Your roadmap should contain the following:

  • Experiment name.
  • Experiment description.
  • Experiment hypothesis.
  • Sample size: how much traffic you need before you can check your results and determine an outcome.
  • Audience: the target population for your experiment and the logic you use to identify it.
  • Variations: how many flag variations this experiment uses, and what percentage of traffic each is assigned.
  • Metrics and metric groups: which metric types you are using and why.
  • Location: the project and environment within LaunchDarkly where your experiment will run.
  • Layers: whether the experiment should be part of a set of mutually exclusive experiments.
  • Holdouts: whether to include the experiment in a running holdout.
  • The date your experiment will start.
  • How long your experiment will run.
  • Status: whether your experiment is still being drafted, is running, is being analyzed after collecting data, or is complete.
  • Priority in comparison to other experiments and projects.

Build and run an experiment

After you have designed your experiment, it’s time to build and run it. To learn how, read Creating experiments.

After the experiment completes

One of the advantages of feature-flag-based experimentation is that your engineering team can immediately roll out the winning variation to the appropriate audiences. The winning variation for a completed experiment is the variation that is most likely to be the best option out of all of the variations you tested. To learn more, read Analyzing experiments.

If you’re not ready to roll out the winning variation yet, you can change the flag variations in an experiment in order to measure different data. Iterations of an experiment build on the value you have generated and allow you to pivot and investigate further.

Conclusion

In this guide you have learned some of the key concepts of experimentation and how to design experiments using feature flags.

Want to know more? Start a trial.

Your 14-day trial begins as soon as you sign up. Get started in minutes using the in-app Quickstart. You'll discover how easy it is to release, monitor, and optimize your software.
