A message to all businesses out there looking for experimentation vendors: I feel for you.

The selection available is unreal. It’s actually the same issue with most SaaS products. Not just in the sense of “I want a, say, behavioural analytics tool — which one do I go for?” but even in the sense of “what martech platforms should I have that make up my tech stack?”

It’s insane how many martech platforms there are to pick from in 2020. Over 200 of them are classified as optimisation, personalisation and testing.

I’ve been involved…

I’ve really been torn recently. And found it hard to articulate my paradox.

On one hand, as I discussed in my last article, experimentation can create a fountain of knowledge and lead us on a path of discovery. Perhaps that path is not the perceived intention, but that’s what creates ‘the original’.

In my musings before that, though, I mentioned how experimenting can restrict creativity and innovation. I just don’t understand how we can test and learn about user behaviours when the users themselves are not ready for such an innovation. 🤷🏼

Sit on…

“Experimentation-driven innovation will kill intuition and judgement.” (Stefan Thomke)

This is the first of seven myths Stefan Thomke writes about in his book, Experimentation Works.

In my last article, I argued that experimenting could do the exact opposite: that it restricts creativity and innovation just as much as it fosters it. Why? Well, I think there can be inherent bias within validation itself, given that users might not know any better or be ready for said innovation. It could be a timing thing or an “I don’t know what I don’t know” thing. Give it a read.

Most disagreed. Although…

TLDR; Experimentation has its place. Intuition has its place. There is inherent bias in validating the new, be that a pre-conditioned way of thinking, fear or timing, which is what I want to explore…

“Don’t experiment when you should think; don’t think when you should experiment.” (Jon Bentley)

This is going to sound counter-intuitive coming from someone who has built his career evangelising experimentation, and who considers creativity one of his core values. But I’m not sure the two work that well hand in hand.

I genuinely feel that experimenting can restrict creativity and innovation just as…

TLDR; Start to re-frame the conversation about experimentation. Talk about expected loss as much as expected gain, use OKRs to focus and align, talk in ranges, use degradation, highlight your program quality metrics.

This is part of a 4-article series on whether, and how, you can attribute revenue to experimentation.

In the first part, we…

TLDR; Communication is key. Align on and define your purpose of experimentation for the business. Set your standards and, therefore, expectations. Define what accuracy is acceptable. And tell stories.

Woah, this is a long one. Sorry. So much so it’s a two-parter. I actually think the second part is better than the first part, too; this is Joss Whedon — all about communication — and next Friday is Zack Snyder — the Batman.

This is part of a 4-article series on whether, and how, you can attribute revenue to experimentation.

TLDR; I don’t believe you should attribute revenue to individual experiments, but I get the fact that we have to; it’s fuzzy. I don’t believe you can accurately forecast that revenue attribution; it gets even fuzzier. And, when you collate experiments together to determine overall revenue impact? I certainly don’t believe you can attribute that type of revenue; that’s super fuzzy. Experiment results tell you which way you’re headed, not how far you’re going.

This is part of a 4-article series on whether, and how, you can attribute revenue to experimentation.

The over-simplification and virality of click-bait case-study headlines, designed for commercial gain and a wider audience, have led to us associating financial gain with an output. I blame Optimizely, the $300m button and WhichTestWon.

This is part of a 4-article series on whether, and how, you can attribute revenue to experimentation.

A beautiful juxtaposition…

TLDR; I am not a believer that you should test everything; instead, it should have a swim-lane of prioritisation. Testing is a (brilliant) methodology for decision making. Factors that affect this include purpose, traffic, type of test and time.

Reminder: there’s no such thing as right or wrong, just contextualised experiences. I’ll be sure to give my opinion, along with others’, on questions people have asked. Please comment, like, share, debate, and drink when you’re done. I’d love for us to learn more and share our experiences together to help our CRO community. …

TLDR; You can’t do it alone. It must be democratised in the form of structure and mindset. If you educate, encourage failure, gamify and demonstrate humility, I think you’ll take a good swing at it.

Every week, I answer the top-voted question. There are currently 37 questions on there. I’d love for you to upvote or contribute a question about my experience, or help me crowd-source other opinions.

Please also leave a comment on this article and give me your opinion. …

David Mannheim

Stories and advice within the world of conversion rate optimisation. Founder @ User Conversion. Global VP of CRO @ Brainlabs. Experimenting with 2 x children
