Yes. We need to attribute revenue to experimentation. Here’s how: Part 1

David Mannheim
9 min read · May 21, 2021

TLDR: Communication is key. Align on and define your purpose of experimentation for the business. Set your standards and therefore expectations. Define what accuracy is acceptable. And tell stories.

Woah, this is a long one. Sorry. So much so it’s a two-parter. I actually think the second part is better than the first, too; this part is Joss Whedon (all about communication) and next Friday is Zack Snyder (the Batman).

This is part of a 4-article series on whether, and how, you can attribute revenue to experimentation:

  1. Why do we assume experimentation is about financial gain?
  2. No, you can’t accurately attribute, nor forecast, revenue to experimentation. Here’s why.
  3. Yes. We need to attribute revenue to experimentation. Here’s how: Part 1
  4. Yes. We need to attribute revenue to experimentation. Here’s how: Part 2

Intro

There’s a persistent perception that experimentation is inextricably linked to revenue. I know, I’m banging my drum again.

It’s because I have to; it has to be said. In a world where cash is king and competition is high, methods that are advertised as pushing us forward, or in some cases as silver bullets, will win. Unfortunately, sometimes that advertising is false: poor statistics, case studies that promise the world, copy-cat behaviour. I unpack that more in the previous two articles of this attribution series.

I continually refer back to the true purpose of experimentation: validation.

The one thing we haven’t discussed is how to attribute revenue to experimentation.

Because my world is one of stakeholder expectations and operations within a conversion rate optimisation environment, I cannot sit here and discuss statistics as articulately as some of my data scientist colleagues. Attributing and forecasting revenue through data modelling, be it predictive or not, is complex and contextual; it’s not something I’m going to be able to dig into.

But I do understand humans, despite being an introvert.

“CRO is not a pure science. We’re dealing with things that are just messy. We’re testing with humans and humans are complex.” Matt Edgar

Experimentation is not designed for revenue, and to treat it as such is to over-simplify a method that is inherently complex. Yet we have to discuss the elephant in the room with our stakeholders. So we use the power of communication to educate them and to create a relationship of authenticity and trust, one that balances validating decisions practically against creating features commercially.

As a result, when we communicate experimentation to others within our business, my recommendations for how to educate and reframe mindsets are (at least for this article; part 2 follows next Friday):

  1. Align on and define your purpose of experimentation
  2. Set standards and therefore expectations
  3. Define what accuracy is acceptable
  4. Tell stories

Align on and define your purpose of experimentation

Different businesses have different purposes for experimentation.

AB testing is designed for validation, yes, but from experience stakeholders at various businesses have given very different answers when asked “why do you want to test?”. It’s actually a really powerful question to ask your stakeholders, and one we ask within our on-boarding quiz at FKA User Conversion. Some responses have included:

  • “I want to know what works above and beyond what’s already in our roadmap”
  • “We can add more £££ to our service and know what works and what doesn’t work”
  • “Are we doing the right things? What should we be focussing on?”

André Morys and the team at Konversions Kraft set those expectations clearly by asking each client, before starting a project, “what their most favourite outcome is and what ‘price tag’ they would put on each of the outcomes to reach their goals”. This helps prioritise and frame the conversation.

Similarly, as part of our on-boarding process at FKA User Conversion, we send a quiz to all stakeholders who are involved in, or have a touchpoint on, the project, and it’s been one of the most influential and enlightening things we do as an agency. There are a series of questions within the quiz that identify:

  1. How important experimentation is to their roles and responsibilities
  2. How involved they want to be in experimentation*

*I really like understanding the difference between importance and involvement. There’s a Venn diagram there, somewhere.

That quiz is here for reference.

Set standards and therefore expectations

At the end of the day, we’re just talking about good levels of communication, aren’t we? Therefore, we should follow the unwritten rules of decent conversations with human beings who are complex and emotional. Setting those expectations first is vital to success so you don’t get the shock factor later on in your career of “woah, hold on, you said this was worth £5m but we’re not seeing it on our bottom line — why?”.

“The most important tip is that you need to sit down with your stakeholders and talk about this early on. About how you want to report it, what the downsides are of reporting it like that — they should be aware what the limitations are.” Annemarie Klaasen

The agreed approaches can be individual to you, to the business, or both. They enable you to set standards for how you work within the business itself. Whether it comes down to the nuance of individual experiment reporting, collective experiment reporting, or the purpose of experimentation as a concept, those standards should be both set and maintained.

We are dealing with human behaviour and different individuals react and learn differently; find a combination of methods that your stakeholders react to (ooo, look at that, a test and learn approach) and set those expectations early on.

Define what accuracy is acceptable

Defining levels of accuracy is vital to the success of your program and communication.

When I originally put out the poll asking “do you believe you can accurately attribute and forecast revenue to AB testing?” there was a clear split. But the word “accurately” was what most were hung up on; it caught people off guard. Rightly so. Accuracy is a range of confidence, so how accurate do we need to be?

“If we were trying to get a 1:1 perfect match attribution then, one, that’s a lot of time, and time is money, you’ll spend checking how accurate it was; two, you’re negatively impacting the ROI, because the more money and time you spend on the accuracy of the figures, the less return you’ll naturally receive. It’s an exponential curve to be that much more sure.” Tim Stewart

Tim relates the time spent discovering this ‘value’ to lost opportunity cost. Perhaps one of naval-gazing (not navel gazing, as Tim pointed out to me; this isn’t to do with belly buttons. I’m sure he coined that term to me on the phone in the past, most likely in one of his many descriptive essays).
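Tim’s “exponential curve” is easy to see with a standard sample-size approximation. Here’s a minimal sketch (the 3% base rate, the margins, and the function name are my own illustration, not Tim’s maths): the traffic you need to pin down a conversion rate grows with the square of the precision you demand, so each extra notch of “accuracy” in the figures costs disproportionately more time and money.

```python
# A rough illustration (my own numbers, not Tim's): traffic needed to
# estimate a conversion rate within a given margin of error grows with
# the square of the precision demanded.
from scipy.stats import norm

def users_needed(p: float, margin: float, confidence: float = 0.95) -> int:
    """Sample size so the estimate of rate p sits within +/- margin."""
    z = norm.ppf(1 - (1 - confidence) / 2)  # e.g. 1.96 for 95% confidence
    return int(z ** 2 * p * (1 - p) / margin ** 2) + 1

for margin in (0.01, 0.005, 0.001, 0.0005):
    print(f"+/-{margin:.2%} around a 3% rate: ~{users_needed(0.03, margin):,} users")
```

Halving the margin quadruples the required traffic; chasing one more decimal place multiplies it a hundredfold.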

Given moving averages and changing external variables, I wonder if the only true method of collective experiment attribution reporting is a holdback group: test and release all the changes you need to, but keep a holdback group of, say, 5% of users who never see them.

I don’t know of anyone who does this personally, but I’m curious: is the lost opportunity (of those 5%) worth being more precise with our attribution? We ask the question, once more, of the ‘alternatives to experimentation’…
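To make the holdback idea concrete, here’s a minimal sketch of how the assignment could be wired up (the hashing scheme and the names are my own illustration, not anyone’s production setup; only the “say, 5%” comes from the text):

```python
# A minimal sketch of a holdback group: deterministically keep ~5% of
# users on the unchanged experience, release every "winner" to the rest,
# and compare revenue per user between the two groups over time.
import hashlib

HOLDBACK_PCT = 5  # the "say, 5%" from above

def in_holdback(user_id: str) -> bool:
    """Stable assignment: the same user always lands in the same bucket."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < HOLDBACK_PCT

def experience_for(user_id: str) -> str:
    # Holdback users see the old site; everyone else gets all released
    # changes. The revenue gap between the groups estimates the collective
    # impact of the whole programme, moving averages and all.
    return "control" if in_holdback(user_id) else "all_released_changes"
```

The hashing makes the split stable across sessions without storing any state, which is what lets the comparison run for months.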

“Find a common sense approach of what’s the area we want to be in. Do we have enough grit and courage to commit ourselves to an 80% significance level for our experiments, or do we make 5x more experiments and, thus, calculate for false negatives and the costs of not implementing a winner?” André Morys
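As a back-of-the-envelope way to frame the calculation André describes, here’s a sketch (every number in it is my own assumption, purely illustrative): relaxing the significance bar lets more duds through as false positives, but it also cuts false negatives, so more genuine winners actually get implemented.

```python
# Expected programme outcomes under different evidence bars (all figures
# are illustrative assumptions, not André's): alpha is the false-positive
# rate implied by the significance level; power is the chance a true
# winner is actually detected.

def programme_outcomes(n_experiments: int, true_win_rate: float,
                       alpha: float, power: float) -> dict:
    winners = n_experiments * true_win_rate   # genuinely good ideas
    duds = n_experiments - winners            # flat or harmful ideas
    return {
        "true winners implemented": winners * power,
        "winners missed (false negatives)": winners * (1 - power),
        "duds implemented (false positives)": duds * alpha,
    }

# Strict regime: 95% significance, ~80% power.
print(programme_outcomes(20, true_win_rate=0.25, alpha=0.05, power=0.80))
# Looser regime: 80% significance; with the same traffic per experiment,
# the lower evidence bar also buys higher power against the same effect.
print(programme_outcomes(20, true_win_rate=0.25, alpha=0.20, power=0.90))
```

Whether the extra winners outweigh the extra duds is exactly the ‘price tag’ conversation to have with stakeholders.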

Tell stories

Of course, you can’t just set expectations; you need to continually manage them. Continually communicate, and in a way that’s engaging.

“At the end of the day, we’re all talking the same language — dollars, numbers and we’re all data driven. A lot of CFOs are willing to have those conversations. Their job is to figure out how to make the company more successful from a financial standpoint. So almost all of them are willing to work with you to help figure that out. With CFOs, I try to get them involved and excited in the project to see what we can prove from a financial standpoint” Matt Edgar

One of my favourite methods of communication to subvert the expectation of pure revenue is to present test results as a story arc.

I’ve seen a lot of different test results and, I admit, some of the best ones are within a presentation deck (despite how much presentation decks suck; give Tufte’s “The Cognitive Style of PowerPoint: Pitching Out Corrupts Within” a read). At FKA User Conversion, we showcase results in a three-pronged attack of:

  1. What did we learn
  2. What does it mean
  3. What next

… and then we provide the results. This narrative gives you the opportunity to tell the story of what was observed and why.

“Define those parameters upfront. Remind people at every opportunity of what you’re working to. If they want to change those parameters, give them the cost: time or money.” Tim Stewart

Part 1 — Summary

“Let’s say you won 20 experiments on metrics other than revenue. I am confident that if you bundled all 20 of your winners together in a single test, you would then also see a meaningful impact on revenue.” Hazjier Pourkhalkhali

I think we need to question, once again, the purpose of experimentation: validation, not revenue generation. In other words, teams only release those changes which have already been deemed sufficiently ‘successful’ and not quantifiably detrimental to the experience. As I’ve already mentioned, though, attributing revenue, at least in some form, is a necessity for some, if not most, if not all businesses.

How is that done? Well, I would recommend that each business determines this for itself.

If they set budgets based on a business case, they will have to support that value themselves. Set expectations around the accuracy of attribution, manage those expectations, and reframe KPIs accordingly. To judge every business decision on revenue is nonsensical; considering the alternatives to experimentation and discussing loss as much as gain would be my core recommendations for reframing the communication stream.

Why are you doing what you’re doing…

“It’s more important to me that we’re making the right product decisions at pace, rather than getting hung up on the specific pounds and pence calculations.” Matt Lacey

Part 2, next Friday the 21st: “Yes. We need to attribute revenue to experimentation. Here’s how (Part 2, Zack Snyder)”.

Sign up to my Substack to not miss a post: https://optimisation.substack.com/

This is part of a 4-article series on whether, and how, you can attribute revenue to experimentation:

  1. Why do we assume experimentation is about financial gain?
  2. No, you can’t accurately attribute, nor forecast, revenue to experimentation. Here’s why.
  3. Yes. We need to attribute revenue to experimentation. Here’s how: Part 1
  4. Yes. We need to attribute revenue to experimentation. Here’s how: Part 2

A quick note on methodology…

I wanted to quickly mention why I’m quoting people left, right, and centre for this particular article, and why I’m even writing it.

Because this topic is contentious (it’s widely used, often wrongly), complex, and contextual, the only way to add objectivity, in my mind, is to crowd-source opinion. I did this back in 2019 as part of a piece of research to understand attribution. Those who were kind enough to provide quotes have approved their use in the past few weeks, and in some cases amended them.

I strongly believe in sharing news and practices about this topic and, in doing so, creating debate. I do feel that a prioritised view of revenue attribution has over-simplified the art of experimentation. It has commercialised the methodology, sometimes for personal gain or greed. In most cases, it has diminished the importance of, and investment in, what I would consider to be one of the most valuable practices within business.

Hence, I’ve created the narrative from experience and undertaken the research for it, but without the opinions of those who provided quotes, or reviewed this, it wouldn’t have that level of objectivity. A huge thank you should go out to those people.

Right or wrong in my approach, I just want to help.
