No, you can’t accurately attribute, nor forecast, revenue to experimentation. Here’s why.

  1. No, you can’t accurately attribute, nor forecast, revenue to experimentation. Here’s why.
  2. Yes. We need to attribute revenue to experimentation. Here’s how: Part 1
  3. Yes. We need to attribute revenue to experimentation. Here’s how: Part 2
In this first part, I'll cover:

  • why you cannot accurately forecast that revenue attribution
  • and why you cannot collectively attribute revenue from multiple experiments

# You shouldn’t, really, attribute revenue to individual experiments

…but we have to, so we have to deal with it. Businesses are run on performance metrics, and experimentation is a method of validating performance. Here are some reasons why we shouldn’t attribute revenue to experiments, and why the accuracy of that attribution can throw you under a bus.

# It takes longer, so ask: what is the lost opportunity cost?

The purpose of experimentation is to prove or disprove a hypothesis. It is designed to validate, not to measure the precise size of the effect it validates.

# The purpose of experimentation is not always revenue gain

At least, not for every experiment. Surely not every decision or improvement we make is designed to have a direct, positive impact on revenue? We’ve worked with retailers who needed to appease brands with merchandising or aesthetic changes. We’ve created experiments to improve post-purchase customer satisfaction. We’ve run tests to prove the CEO’s change in navigation right (or wrong); not a tactic I endorse, by the way.

# Forecasting experiment attribution is even harder

# Experiment results are based on a series of averages
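What "a series of averages" means in practice can be sketched with a toy simulation. All figures below — the conversion rate, the order-value distribution — are invented for illustration:

```python
import random

random.seed(42)

def revenue_per_visitor():
    # Assumed skewed revenue model: most visitors spend nothing,
    # a few place occasional large orders.
    if random.random() < 0.97:          # ~97% don't purchase
        return 0.0
    return random.lognormvariate(4, 1)  # heavy-tailed order values

control = [revenue_per_visitor() for _ in range(50_000)]
mean_control = sum(control) / len(control)
zero_share = sum(1 for r in control if r == 0) / len(control)

# The "result" of a revenue experiment is a difference between two
# averages like this one, and the average hides the skewed shape below it.
print(f"mean revenue per visitor: {mean_control:.2f}")
print(f"visitors contributing nothing: {zero_share:.0%}")
```

The reported uplift is a gap between two such means, each propped up by a small minority of buyers.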

# Experiment results are a snapshot of time and behaviour

An experiment is a snapshot in time: its results come from a specific sample, which reacted in a specific way, producing a specific result.
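A toy rerun of the "same" experiment on several different samples shows how much the snapshot matters. The uplift, conversion rate, and revenue model are all assumed figures:

```python
import random

random.seed(1)

def visitor_revenue(uplift=0.0):
    # Assumed model: ~3% purchase, heavy-tailed order values,
    # the variant nudges spend by `uplift`.
    if random.random() < 0.97:
        return 0.0
    return random.lognormvariate(4, 1) * (1 + uplift)

def run_experiment(n=20_000, true_uplift=0.05):
    control = sum(visitor_revenue() for _ in range(n)) / n
    variant = sum(visitor_revenue(true_uplift) for _ in range(n)) / n
    return (variant - control) / control

# The "same" experiment run on five different samples (snapshots):
lifts = [run_experiment() for _ in range(5)]
print([f"{lift:+.1%}" for lift in lifts])
```

The true uplift never changes, yet each snapshot reports a noticeably different number — the one you happen to run is the one that gets forecast.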

# Collective experiment attribution is nigh impossible

Then there’s the issue of forecasting revenue uplift not just from a single experiment, but from multiple experiments combined.
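One hypothetical way to see why individual uplifts don’t simply add up: imagine two experiments that both act on the same pool of checkout abandoners (all numbers invented):

```python
# Hypothetical: two experiments that both "rescue" checkout abandoners.
abandoners = 1000            # shared pool both changes act on (assumed)
rescue_a = 0.30              # measured alone: A converts 30% of the pool
rescue_b = 0.30              # measured alone: B converts 30% of the pool

alone_a = abandoners * rescue_a   # extra orders A showed in isolation
alone_b = abandoners * rescue_b   # extra orders B showed in isolation
naive_sum = alone_a + alone_b     # what a stacked forecast would claim

# Shipped together, they compete for the same visitors: B can only
# act on the abandoners A did not already rescue.
combined = abandoners * (1 - (1 - rescue_a) * (1 - rescue_b))

print(f"stacked forecast: {naive_sum:.0f} orders, combined: {combined:.0f}")
```

Whenever experiments overlap in audience or funnel step, summing their individually measured uplifts overstates the collective effect.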

# Bonus: which leads me to the discussion of the term ‘accurate’…

When I posted a poll on this for others to see, the majority of comments were along these lines.

# Summary

Revenue attribution from individual experiments is difficult and fuzzy, and, in an ideal world, shouldn’t be done.

  • Experiments are designed as a snapshot in time
  • If you need a high degree of confidence, remember that revenue is a distribution, so experiments will take longer to run, perhaps reducing your cadence
  • And so ask: what is the lost opportunity cost of doing so?
  • Focus less on revenue attribution and more on what the purpose of experimentation is for your business. Different maturities will yield different approaches to revenue attribution
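The second and third points above can be sketched with the common rule of thumb n ≈ 16·(σ/δ)² for per-arm sample size (~80% power, 5% two-sided alpha). The metric figures are assumed, not taken from any real test:

```python
# Rough per-arm sample size via the rule of thumb n ≈ 16 * (sigma / delta)^2
# (~80% power, 5% two-sided alpha). A sketch, not a power calculator.

def sample_size(sigma, delta):
    return 16 * (sigma / delta) ** 2

# Binary conversion metric (assumed): p = 3%, detect +0.3pp absolute.
p = 0.03
n_conversion = sample_size((p * (1 - p)) ** 0.5, 0.003)

# Revenue per visitor (assumed): mean 2.70, sd 25 — heavy-tailed —
# detecting the matching +10% relative change (delta = 0.27).
n_revenue = sample_size(25.0, 0.27)

print(f"conversion test: ~{n_conversion:,.0f} visitors per arm")
print(f"revenue test:    ~{n_revenue:,.0f} visitors per arm")
```

Because revenue has a far larger standard deviation relative to the effect you want to detect, the revenue test needs several times more traffic — which is exactly where the lost opportunity cost of a reduced cadence comes from.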
