Experimentation platform selection is really, really hard

A message to all businesses out there looking for experimentation vendors: I feel for you.

The selection available is unreal. It’s actually the same issue with most SaaS products. Not just in the sense of “I want a, say, behavioural analytics tool — which one do I go for?” but even in the sense of “what martech platforms should I have that make up my tech stack?”

It’s insane. It’s 7,040 levels of insane. That’s how many martech platforms there are to pick from in 2020. Over 200 of which are classified as optimisation, personalisation and testing.

I’ve been involved in a lot of these discussions because of the breadth of experience I’ve personally had with each platform; “which one is best?” tends to be the number one question I get asked. Scratch that. It’s definitely the number one question. So much so that I even wrote a white paper on this very approach and subject with Damian at User Conversion.

That’s not me tooting my own horn, but I feel I’ve learnt one significant truth with these platforms.

They’re all very samey with samey features packaged up in a less-so-samey way. Some have more features than others, some better features than others. But the understanding of your business challenges is the most important thing to communicate when in a vendor selection process. It allows those vendors to respond with contextually-specific, persuasive answers on how their platform can support your journey.

Otherwise, what I often see happening is that the organisation will ask for a list of features. Too many. Non-prioritised, because the list is a democratic request. Vendors will have 90% of them. Vendors will then seduce the organisation with sexy features that weren’t asked for. Ahem, personalisation. We then end up in feature-tennis. This over-stimulation of feature wars means that the only differentiation left is either price or how many salespeople the vendor has to pester you with.

It’s harsh but true.

This isn’t dissing those platforms. Actually, I want to help both parties. If anything, this is a plea to businesses selecting an experimentation platform. It’s a difficult market to navigate and I really feel for you. Wade through that sea of sameness. You can create platform differentiation by thinking about selecting a vendor in the same way you’d select a consultancy. Provide your challenges with context and allow the vendor to respond on how their platform specifically can help solve those challenges.

What are my primary vendor considerations?

Here is the list of questions I ask a business when trying to understand which platform will be best for them.

1 — Definition: Why do you experiment?

I think this is one of the best questions to ask. Not only does it make people think about why this vendor selection process is happening, but it’s a broad enough question to set the scene.

Truly understanding the purpose of experimentation.

Whilst the evangelists will state that the true purpose of AB testing is validation, the real purpose might be something different. Increasing conversion rates. Personalising behaviours. Marketing promos. Bypassing development (yes, it’s a thing). Creating a test-and-learn culture to…tick a box (yes, that’s also a thing).

This can influence which platform you choose. Some focus on an easy-to-use UI for the non-coders out there. Some have a commercial arrangement based on volume of usage, meaning you should understand how many tests you’re producing now and what that looks like in 12 months, too.

2 — People: Who experiments now and who do you want to experiment in the future?

Is your experimentation process centralised or decentralised? Does it sit with marketing, product or engineering? Knowing the resources available and their skill sets can help you understand the maturity of platform that you’re after. Decentralised model where engineers are heavily involved and running hundreds of tests a month? You might want a platform that’s slightly more enterprise, with great permission allowances and a versioning system within the code editor. That’s more important than, say, a really nice UI on a WYSIWYG editor.

3 — Process: What does your experimentation process currently look like?

Do you have an experimentation platform currently — what limitations have you found since using it, and why was it selected? Is it centralised in one team, or decentralised across product teams? How mature is your experimentation programme currently? (I think the Optimizely maturity quiz is the best way to answer this.)

4 — Technical: What does your tech stack look like and how do you want to experiment?

Some companies might be embedded in the Adobe stack. Or completely wedded to a behavioural analytics tool like Contentsquare or Segment. Or run everything through JIRA. What are the integrations with the platforms really like? And yes, everyone offers these integrations more or less, but give me examples and detail of how you’ll make our lives easier.

Not to mention how the site is built. Clean code bases. Site structure. Underlying technologies — for example, React is a beast to experiment within, and from experience some platforms handle it much better than others.

Equally, where experimentation sits (product vs marketing vs engineering, centralised vs decentralised) and how tests are created are vital. Is server-side required? Who builds tests — do we need marketing to add banners to bypass development (I know, I know, sad face)? If so, we need a simple WYSIWYG editor — and some are far better than others, because UI is an easy differentiator.
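If “server-side” is an abstract term to you, here’s the concrete shape of it: instead of a visual editor rewriting the page in the browser, your own code assigns each user to a variant deterministically. A minimal sketch of how that bucketing typically works — the function name and hashing scheme here are illustrative, not any specific vendor’s API:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically bucket a user: the same user and experiment
    always yield the same variant, with a roughly even split overall."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always sees the same variant for a given experiment,
# but may land in a different bucket for a different experiment.
variant = assign_variant("user-123", "checkout-cta", ["control", "treatment"])
```

Because the assignment happens in your own stack, there’s no page flicker and no DOM manipulation fighting frameworks like React — which is exactly why engineering needs a seat at the table for this question.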

If you’re not on the technical side and don’t know most of what I’m talking about, sit down with your engineers. You’ll waste a lot of time in your selection process if your chosen tool ultimately won’t work with your existing stack — and in many cases, it might not. But, a word of warning: I’ve seen engineering teams “take over” the process. They should be heavily involved, especially if that’s where experimentation sits. But democratising experimentation, where most if not all of the business evangelises it, is the only route to success.

5 — Personalisation: How important is user or feature personalisation?

Everyone will say it’s super important, we know this. But everyone will also have different definitions of personalisation, too. My first recommendation is to align on your definition of personalisation (and please don’t say 121).

As a prerequisite, it’s vital to understand the current maturity of the business, and the purpose of experimentation, before determining how important it actually is. So ask more detailed questions: of the tests you currently run, how many are run at a segment level? What are the OKRs and vision for the business in the next three years, and is personalisation part of that strategy? You’re trying to test the company to see how important personalisation really is, beyond just lip service.

As a bonus, I’d also consider the level of customer support or service you think you’ll need. There’s technical support, sure, but what about ‘strategic consultancy’? I’d be wary of any vendor offering consultancy support — that’s a whole topic in itself. And I won’t even claim to be able to articulate it better than Steven Pavlovich’s “Managed Services Suck” musings. How much and what type of support you need as a business depends on your maturity, resources and skill set. Inform the vendor of this, contextually, to allow them to respond appropriately (note: respond, not upsell).

What type of vendors are there?

We categorise vendors into four segments. Some platforms naturally straddle categories. Some platforms want to categorise themselves as all four (!) (I’ve had those conversations; they’re not pretty). There will be some cross-over, and therefore this isn’t hard and fast. But what these categories do is try to link your maturity, ambition and tech architecture to a category.

  • Enterprise Tech Stack. These are your Adobes (Target), your Oracles (Maxymiser) and perhaps your Googles (Optimize), where being embedded within a tech stack is super important and efficient.
  • Pure and Mature Experimentation. The platforms that are pure players within the market.
  • Wider Experimentation Suites. There’s a bit more than just experimentation here; it might be heatmaps or session-recording functionality.
  • Personalisation Suites. These platforms pride themselves on a personalisation proposition and are more pure-play in this area.

I think it’s important to highlight the personalisation suite as independent because, as experimentation becomes a broader need in the market, so too does personalisation. The two are different, albeit inextricably interlinked. I’ve had conversations with companies where personalisation suite X is awesome at personalisation, but not as good at experimentation as experimentation platform Y. So what’s more important to you? There are companies I know that have a personalisation suite and an experimentation platform. Again, we come back to the ‘purpose of experimentation’ for your business.

I’ll also just mention the trend of experimentation platforms becoming more like CDPs (customer data platforms), which gives them the ability to hold data and personalise on its behalf. We have the acquisition streak of Kibo with Monetate, or Episerver with Optimizely, or Oracle’s Unity platform. We even see the other way around with Exponea, a CDP, introducing their experimentation platform. I think this is an interesting set of moves, making AB testing more involved.

If you’re after more information you can read the whitepaper we wrote here.

If I were you…

If I were you, I’d whittle down my choices to fewer than five vendors, ideally staying within the categorisation structure above, based on your maturity, ambition and tech architecture.

Base your decision on similar companies that you’re aware of, or go to agencies or consultants that can offer mostly impartial advice. Note: their approach will likely differ from the format above; this is just my experience. Also note: some agencies receive a kickback from said vendors, so do ask beforehand to remove potential bias.

Then I would write the RFP as a series of problem statements. Give the vendors context and challenges to solve. Think about questions framed like the below, encouraging long-form, descriptive responses.

  • Our code base is React and we’re looking to optimise experiences for our users across web and app. We have an agile, disciplined development process and need to ensure any tech is fully aligned with the engineering department’s goals, without hindering site speed. How will your platform impact our site, and how might we prevent daily conflicts with our releases?
  • We’re just starting our experimentation journey. We have dabbled with a few tests in Google Optimize and have ambitions to run >100 tests in the next 12 months. We have 1 x CRO manager, a data analyst and an engineering team of 8. How can you help us grow this velocity and evangelise how AB testing can help us to our senior management team?


I feel like I’ve been a bit unfair on experimentation platforms. All I’m trying to do is help, based on my experience. And that might involve revealing some ugly truths. Even more so if you read the bonus section below. And it will almost definitely mean I get in trouble. But what the heck. I care more about supporting our industry than I do about being trolled.

I’m trying to help create a clear method of differentiation because, frankly, there is currently very little in a saturated market. We need to allow businesses to make a more informed, less-biased decision on which platform is best for them; to do that, vendors must explain how their tech can help solve those businesses’ individual challenges. For that to happen, businesses must provide context and explain what those challenges are. The same goes for any vendor, not just experimentation. Indeed, the same goes for agencies. The more information you can provide, the better. A 30-minute phone call won’t suffice.

I don’t think it’s rocket science, nor difficult. I just feel we get mesmerised by the overwhelming number of features available and by the price conversation. There should be some form of regulated or, at least, standardised method of selecting a platform.

In my opinion, to do that, we need to help these vendors by telling them what the actual business challenges are. That way they can truly differentiate by explaining not just that they have a WYSIWYG editor, but how it can assist the marketing team because of its ease of use and easy-to-select divs. Or not just that they have permissions and version controls, but how businesses with a centre of excellence can easily review the code or test set-up of their product teams and revert it if necessary.

Challenge vendors by giving them enough context to help you.

Bonus: The problem with price

I didn’t know whether to write this. It’s a tangent from the above and I think it’ll get me in trouble. But screw it. It’s important.

Price is often a core differentiator between all vendors (not just in experimentation, but particularly so). But I often feel as though pricing models are inconsistent. And it would appear I’m not alone in this statistically insignificant and probably biased poll…

Business development teams are commissioned per sale; it’s in their personal interest to sell you their platform. And I’m not saying this happens — but I can see it being at the expense of what’s right for you as a business. That’s a whole new conversation on greed I won’t entertain publicly. Yet.

I do find it incongruent that the price for the same service differs — a lot. I’ve seen companies being quoted £300k p.a. and then suddenly, because it’s the end of the quarter, they’re quoted £200k p.a. Or, for sexy brand names and case studies, the price suddenly drops by £40k. Or, as soon as the competition heats up for a brand, vendors will willingly match others. There seems to be a lot of give in the price. I was even privy to one discussion where, just 30 minutes before a big meeting with a prospect, the vendor made up the figures with the rationale “they can afford it”.

Some pricing models are based on performance. Some on usage. Both of which I find bizarre.

I also find it hard to accept that all platforms want to move more “enterprise” because, at the end of the day, that’s where the money is. I won’t name names, but there are at least 12 platforms I know of that have moved away from their entry-price proposition toward a more expensive one. This usually happens when they’ve had external investment and are forced to make more profit — but how can they seemingly justify doubling prices from one year to the next?

It happens. Unethically so. This isn’t knocking platforms. Much. Yes, I’d like to see some of their methods disrupted, but they do have a job of promoting and selling their platform. I just feel that sometimes those methods can swerve into defamation, given that the experimentation market is more about substitution than adoption — especially when it’s as price-orientated as it is.

Don’t forget

  1. Please feedback on this article — was it helpful? Awesome? Crap? https://forms.gle/eX9ztvxDXSdFqJ9CA
  2. Ask questions and vote https://app.sli.do/event/orj9eqab/live/questions. I answer these on a weekly basis by taking the top voted topic and writing about it with my stories and experience.
  3. Subscribe to my newsletter at https://optimisation.substack.com/
  4. Leave a comment. Create debate. Have a drink.

Stories and advice within the world of conversion rate optimisation. Founder @ User Conversion. Global VP of CRO @ Brainlabs. Experimenting with 2 x children