A message to all businesses out there looking for experimentation vendors: I feel for you.

The selection available is unreal. It’s actually the same issue with most SaaS products. Not just in the sense of “I want a, say, behavioural analytics tool — which one do I go for?” …


I’ve been really torn recently, and I’ve found it hard to articulate my paradox.

On one hand, as I discussed in my last article, I believe that AB testing encourages innovation. It can create a fountain of knowledge and lead us on a path of discovery. …


“Experimentation-driven innovation will kill intuition and judgement”. Stefan Thomke

This is the first of seven myths Stefan Thomke writes about in his book, Experimentation Works.

In my last article, I argued that experimentation could do the exact opposite: that it restricts creativity and innovation just as much as it fosters…


TLDR; Experimentation has its place. Intuition has its place. There is inherent bias in validating the new, be that a pre-conditioned way of thinking, fear or timing, and that is what I want to explore…

“Don’t experiment when you should think; don’t think when you should experiment” Jon Bentley

This is…


TLDR; Start to re-frame the conversation about experimentation. Talk about expected loss as much as expected gain, use OKRs to focus and align, talk in ranges, use degradation, highlight your program quality metrics.
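To make “expected loss as much as expected gain” concrete, here is a minimal sketch of how both can be reported from a Bayesian read of a test result. The conversion counts, priors and sample sizes below are illustrative assumptions for the example, not figures from any real experiment or from the article itself.

```python
# Sketch: report expected loss alongside expected gain for an A/B test,
# using Monte Carlo samples from Beta posteriors. All numbers are made up.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical results: (conversions, visitors) for control (A) and variant (B)
control = (480, 10_000)
variant = (530, 10_000)

# Beta(1, 1) prior -> Beta(conversions + 1, non-conversions + 1) posterior
samples_a = rng.beta(control[0] + 1, control[1] - control[0] + 1, size=100_000)
samples_b = rng.beta(variant[0] + 1, variant[1] - variant[0] + 1, size=100_000)

lift = samples_b - samples_a
expected_gain = np.maximum(lift, 0).mean()   # upside you capture if B really is better
expected_loss = np.maximum(-lift, 0).mean()  # risk you carry if B is actually worse

print(f"P(B beats A):  {(lift > 0).mean():.1%}")
print(f"Expected gain: {expected_gain:.4%} absolute conversion rate")
print(f"Expected loss: {expected_loss:.4%} absolute conversion rate")
```

Framing the result as a pair (what you stand to gain versus what you risk losing) supports the “talk in ranges” point: it communicates uncertainty rather than a single headline uplift.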

This is part of a 4-article series on whether, and how, you can attribute revenue to experimentation:


TLDR; Communication is key. Align on, and define, the purpose of experimentation for the business. Set your standards and therefore expectations. Define what accuracy is acceptable. And tell stories.

Woah, this is a long one. Sorry. So much so it’s a two-parter. I actually think the second part is better…


TLDR; I don’t believe you should attribute revenue to individual experiments, but I get the fact that we have to; it’s fuzzy. I don’t believe you can accurately forecast that revenue attribution; it gets even fuzzier. And, when you collate experiments together to determine overall revenue impact? I certainly don’t…


The over-simplification and virality of click-bait headlines for case studies, designed for commercial gain and a wider audience, have led to associating financial gain with an output. I blame Optimizely, the $300m button and WhichTestWon.

This is part of a 4-article series on whether, and how, you can attribute revenue…


TLDR; I don’t believe you should test everything; instead, testing should have its own swim-lane of prioritisation. Testing is a (brilliant) methodology for decision making. Factors that affect this include purpose, traffic, type of test and time.
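As a rough illustration of how two of those factors, traffic and time, constrain what is worth testing, here is a back-of-the-envelope sketch using the standard two-proportion sample-size formula. The daily traffic, baseline conversion rate and minimum detectable effect are assumptions chosen purely for the example.

```python
# Sketch: how traffic and minimum detectable effect translate into test duration.
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, mde_relative, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Hypothetical page: 2,000 visitors/day split across two variants,
# 3% baseline conversion, looking for a 10% relative uplift.
n = sample_size_per_variant(baseline_rate=0.03, mde_relative=0.10)
days = ceil(2 * n / 2000)
print(f"~{n:,} visitors per variant, roughly {days} days of traffic")
```

A low-traffic page or a small expected effect can push the required duration out to months, which is exactly why not everything earns a place in the testing swim-lane.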

Reminder; There’s no such thing as right or wrong, just…


TLDR; You can’t do it alone. It must be democratised in the form of structure and mindset. If you educate, encourage failure, gamify and demonstrate humility, I think you’ll take a good swing at it.

Every week, I answer the top voted question in my Slido. There are currently…

David Mannheim

Stories and advice within the world of conversion rate optimisation. Founder @ User Conversion. Global VP of CRO @ Brainlabs. Experimenting with 2 x children
