Why experiment prioritisation is so damn difficult

David Mannheim
7 min read · Apr 9, 2021

TLDR; Prioritisation is bloody hard. That’s because we either over-simplify it, or try to append arbitrary and binary frameworks to the problem, or focus on outcomes rather than value, or don’t take into account iteration, or don’t take into account alignment with the wider business objectives, or isolate experiments into a pot ignoring the wider context of the ‘value add’. See. It’s bloody hard, isn’t it?

Most of my conversations go like this on a day-to-day basis.

“How are you?”

“Busy. What about you?”

“Busy, too.”

We’re always busy. There’s never enough “time” in the day. And as a father of a 3-year-old tearaway and a new(ish) 8-week-old baby, I concur. I’m super busy. But I couldn’t tell you with what. Apart from continually changing nappies.

Experimentation and A/B testing aside, I just always find it fascinating that, baby or no baby, “busy” is the operative response to a fairly simple question.

In a business context, it’s the same response when it comes to how we prioritise, not just our time, but our product.

Everything is always on the backlog. There’s never enough resource. Or when we look at experiments, we focus on a very isolated element of a much wider product.

I would go as far as to say that in every digital company I have ever spoken to — and we must be at over 400 by now — there is never enough resource to build. Every single one of them has wanted ‘more resource’, specifically development. If you’re running ~5 experiments per month, resource is what stops you running 10. If you’re developing ~12 features per month, resource is what stops you developing ~20.

Engineering resource is always the dodgy knee in the marathon. Yet it seems to be the answer to all problems of “busy” or “there’s too much to do”. When really, whilst you can scale resource, unlike time, you’re still always going to have that problem of ‘there not being enough’.

Here’s the newsflash. There’s never enough.

It’s relative. In a similar way to how a very wealthy person will always want more money, or will continue to spend up to their means, a digital company will always desire more development resource. Because there’s always more to do or build.

So the question shouldn’t be about resource, but about prioritisation.

Learning why prioritisation is so hard gets us 50% of the way to understanding how to solve it. So let’s dive into that. And I know, I know, we’re all after “how do you prioritise experiments though, David”. That’s next week. I just think if we first understand why prioritisation is hard, specifically for experiments, we can carve out a toolbox that helps us.

Feedback loops

There are so many feedback loops. Too many.

Customers provide their ‘wish-list’. Sales teams provide their requirements to make better sales. Engineering teams provide tasks that are needed to make the product more resilient. Designers provide great ideas for a better, more intuitive UX/UI. And that’s not forgetting those pesky HiPPOs that want something yesterday (that’s not intended to be derogatory. Calling someone in a position of power a malicious, hungry, dominant beast of an animal, when really their opinion is representative of their expected decision making, is just rude. But I’ll roll with it for now).

And the main problem with feedback loops? Or the lack of alignment of feedback loops? They are isolated to ideas.

As humans, we’re creative by nature, and we continually yearn for creativity, ideas, innovation and, linked to that, acceptance. When it comes to experimentation, as it’s a creative output, we produce ideas. But such ideas are often not focussed on or aligned with the wider context of what the business is trying to achieve. And as such, you get ideas like bloody sticky filters*

Done. Next.

As well as being naturally creative, as humans we like to tick things off our to-do lists. And as such, we move from one to the next.

But when it comes to experimentation, iteration is the sole foundation of value. We measure and learn from what we build. And because of that we have the gift to iterate on what we build.

As an experimenter, one of my core beliefs is that we should focus on just a few hypotheses and iterate within those hypotheses to find the most impactful execution. Remember, a hypothesis is a theory and, in my opinion, should not go as deep into the weeds as the execution.

The thinking being that there are hundreds of possible executions for a single given hypothesis.

In the instance of one of our clients at User Conversion, 50% of all the 200+ experiments we built for them were iterations of one another. In one of those instances, a checkout-based experiment, we knew the “concept” worked, but by altering the execution, over 6 different iterations might I add, we saw an incremental gain of over 80% on the original hypothesis.
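For a rough sense of how that compounds, here’s a minimal sketch with purely hypothetical per-iteration lifts (not the real client figures): a handful of modest, individually unremarkable wins on the same hypothesis stack up to a very large gain on the original execution.

```python
# Hypothetical per-iteration lifts on the same hypothesis (illustrative only,
# not the actual User Conversion client data).
lifts = [0.20, 0.15, 0.10, 0.08, 0.06, 0.05]  # six iterations of the same concept

cumulative = 1.0
for i, lift in enumerate(lifts, start=1):
    cumulative *= 1 + lift
    print(f"Iteration {i}: +{lift:.0%} this round, {cumulative - 1:.0%} over the original")

# Six modest lifts compound to just over 80% on the original execution.
```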

The problem with this is that it muddies the product prioritisation pool. You inherently promise incremental, predictable value, release after release, test after test, and being able to live up to that promise is extremely important in order to sustain customer trust. As such, prioritisation should be a moving feast, never static, and that’s just hard work.

Poor prioritisation models

There are a tonne of prioritisation models out there to help. MoSCoW. RICE. ICE. PIE. KANO. PXL. [insert another random acronym here. Mmmm. Pie.]

But I find the majority of them have 3 x inherent problems:

They lack alignment to the wider context of the business. Prioritisation should be top down, focussing on the business mission first, business objectives second, and so forth. Most prioritisation models focus on the ‘execution’, i.e. the very last thing within a triangle-y-hierarchy-diagram-thing, with execution at the base, then concept, user problem, product objectives, business objectives and the mission at the top.

They promote solutions that hold little risk. By using “effort” as a scoring factor, we are basically saying “you know those juicy features that could really solve our user problems? Well, because they are complex to build, despite them potentially being the most impactful, we’ll place them at the back of the queue. And instead focus on the iddy-biddy things that need doing because they are…here comes the word…easy”. Perhaps that’s a little facetious.

And lastly, they’re super subjective. We append arbitrary and subjective scores to each attribute. Take ‘confidence’ within the “ICE” model. Andrea Saez said it better than I ever could…

There is no way you can know the reach, impact, or effort on most things without properly having vetted if you’re even working on the right things, even less if you haven’t spoken to anyone about it. So how could you possibly have any confidence?
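To make that subjectivity concrete, here’s a minimal sketch of how ICE-style scoring works, using hypothetical experiments and gut-feel scores; note how a one-point nudge to someone’s ‘confidence’ is enough to reorder the backlog.

```python
# Minimal ICE-style scoring sketch (hypothetical experiments and scores).
# ICE = Impact x Confidence x Ease, each usually a gut-feel rating out of 10.
experiments = {
    "Sticky filters": {"impact": 4, "confidence": 8, "ease": 9},
    "Checkout redesign": {"impact": 9, "confidence": 5, "ease": 6},
    "Social proof on product pages": {"impact": 6, "confidence": 6, "ease": 7},
}

def ice_score(scores: dict) -> int:
    return scores["impact"] * scores["confidence"] * scores["ease"]

for name, scores in sorted(experiments.items(), key=lambda kv: ice_score(kv[1]), reverse=True):
    print(f"{name}: {ice_score(scores)}")

# Sticky filters (288) beat the checkout redesign (270) purely because they are easy.
# Nudge the redesign's 'confidence' from 5 to 6 and it scores 324 and jumps the queue:
# the ranking is only as good as the subjective numbers fed into it.
```

Which is also the effort problem above playing out in miniature: the easy, low-risk work wins by default.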

Simplicity

I urge you to simplify and find a prioritisation process that works for you. It doesn’t even have to be a model.

Growth Hackers hang their hat on ICE. CXL hang theirs on PXL. WiderFunnel place theirs on PIE. Optimizely put it on IE, without the C (which, to be fair, adds a level of standardisation around what “impact” and “effort” mean. I do like that personally).

Despite the need for prioritisation simplicity, I’m not the biggest fan of simplifying something that’s inherently complex down to a series of acronyms, often shoehorned to make the acronym sound better. PIE does sound good, acronym or no acronym.

They lack alignment and are based on inherently subjective, and therefore arbitrary, scores.

It wasn’t until I read CXL’s blog post on prioritisation that I found something that really aligned with my thinking, but I was so disappointed in the output of that logic. A binary-score mechanism that prioritises experiments on usability or behavioural attributes such as “above the fold” or “noticeable within 5 seconds” seems so immature.

And there’s the answer as to why prioritisation is so hard…

In true, opinion-led post stylee, it’s contextual. The main purpose of this article is to explain why prioritisation is so much of a pain, because that inherently will help you figure out which prioritisation process to use.

Simplifying prioritisation with such frameworks implies that there’s a generalised framework you can apply to every business. When really, the frameworks are there just to help in given situations. I think this post says it best when it comes to applying frameworks to situations:

The things to remember are:

  1. Frameworks are great, but there’s no ‘one model’ that works for everyone. I know of a couple of companies that use different models to prioritise different things — customer service tickets, experiments, product features — and that’s OK.
  2. Context is king. Of the million models available (here’s a list of about 20 of them, by the way…), pick what’s right for you and the business and don’t try to change your systems and processes to match a model. Try to match the model to your systems and processes. Test and learn from that, and then iterate on it.
  3. When people prioritise experiments, they do so in an isolated fashion. An experiment is just an output and to prioritise an output you need to prioritise everything that comes before the output.

Next up. How do you prioritise experiments? (the proper answer)

*y’all know I hate sticky filters — easily the most common, usability orientated

One last thing … I’d love it if you could complete one of the below. It really helps me to get feedback, speak to people, create relationships and learn from others.

  1. Ask questions and vote https://app.sli.do/event/orj9eqab/live/questions. I answer these on a weekly basis by taking the top voted topic and writing about it with my stories and experience.
  2. Subscribe to my newsletter at https://optimisation.substack.com/
  3. Leave a comment. Create debate. Have a drink.


David Mannheim

Stories and advice within the world of conversion rate optimisation. Founder @ User Conversion. Global VP of CRO @ Brainlabs. Experimenting with 2 x children